Test Report: KVM_Linux_crio 18427

190844ee5aebf41cade975daf7bc7fe77d6b0ce4:2024-03-18:33631

Failed tests (30/325)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 159.81
41 TestAddons/parallel/MetricsServer 11.59
53 TestAddons/StoppedEnableDisable 154.31
96 TestFunctional/parallel/DashboardCmd 302.09
172 TestMultiControlPlane/serial/StopSecondaryNode 142.09
174 TestMultiControlPlane/serial/RestartSecondaryNode 60.81
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 373.06
179 TestMultiControlPlane/serial/StopCluster 142.05
239 TestMultiNode/serial/RestartKeepsNodes 309.71
241 TestMultiNode/serial/StopMultiNode 141.71
248 TestPreload 278.72
256 TestKubernetesUpgrade 353.34
330 TestStartStop/group/old-k8s-version/serial/FirstStart 283.06
353 TestStartStop/group/no-preload/serial/Stop 139.1
356 TestStartStop/group/embed-certs/serial/Stop 139.1
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.95
360 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
361 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 111.85
362 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
370 TestStartStop/group/old-k8s-version/serial/SecondStart 747.51
371 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.42
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.3
373 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.26
374 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.47
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 393.91
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 373.69
377 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 263.16
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 114.49
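The first entry above, TestAddons/parallel/Ingress, is broken down in the log that follows: the test waits for the ingress-nginx controller pod, deploys an nginx Ingress and pod from testdata, then curls 127.0.0.1 inside the VM with the Host header nginx.example.com. In this run the ssh'd curl never got a response (ssh reported status 28, curl's "operation timed out" exit code), so the check at addons_test.go:278 failed. The sketch below shows how the same probe could be repeated by hand against this profile; the profile name, context, namespace, selector, and Host header are taken from the log, while the explicit 30-second curl timeout is only an illustrative assumption, not the test's own setting.

# Re-run the failing ingress probe by hand (names taken from the log below;
# the -m 30 timeout is an assumption).
PROFILE=addons-106685

# The test first waits for the ingress-nginx controller pod to become Ready.
kubectl --context "$PROFILE" -n ingress-nginx get pods \
  -l app.kubernetes.io/component=controller

# It then curls the controller from inside the VM with the Host header the
# Ingress routes on; in this run, this is the step that timed out.
out/minikube-linux-amd64 -p "$PROFILE" ssh \
  "curl -s -m 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
echo "ssh/curl exit: $?"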
TestAddons/parallel/Ingress (159.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-106685 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-106685 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-106685 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [86b2012e-e452-410b-808c-3fc378157346] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [86b2012e-e452-410b-808c-3fc378157346] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.008017883s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-106685 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.779732158s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-106685 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.205
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-106685 addons disable ingress-dns --alsologtostderr -v=1: (1.772740911s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-106685 addons disable ingress --alsologtostderr -v=1: (7.867287141s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-106685 -n addons-106685
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-106685 logs -n 25: (1.409242587s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-954927                                                                     | download-only-954927 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| delete  | -p download-only-091393                                                                     | download-only-091393 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| delete  | -p download-only-994148                                                                     | download-only-994148 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| delete  | -p download-only-954927                                                                     | download-only-954927 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-502218 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC |                     |
	|         | binary-mirror-502218                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38477                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-502218                                                                     | binary-mirror-502218 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| addons  | enable dashboard -p                                                                         | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC |                     |
	|         | addons-106685                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC |                     |
	|         | addons-106685                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-106685 --wait=true                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-106685 ssh cat                                                                       | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /opt/local-path-provisioner/pvc-e86d5e17-8190-4e06-8916-09db8624ca3e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-106685 addons disable                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-106685 addons disable                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-106685 ip                                                                            | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	| addons  | addons-106685 addons disable                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-106685 addons                                                                        | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC |                     |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:48 UTC | 18 Mar 24 12:48 UTC |
	|         | addons-106685                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:48 UTC | 18 Mar 24 12:48 UTC |
	|         | addons-106685                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:48 UTC | 18 Mar 24 12:48 UTC |
	|         | -p addons-106685                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-106685 ssh curl -s                                                                   | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:48 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:48 UTC | 18 Mar 24 12:48 UTC |
	|         | -p addons-106685                                                                            |                      |         |         |                     |                     |
	| addons  | addons-106685 addons                                                                        | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:48 UTC | 18 Mar 24 12:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-106685 addons                                                                        | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:48 UTC | 18 Mar 24 12:48 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-106685 ip                                                                            | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:50 UTC | 18 Mar 24 12:50 UTC |
	| addons  | addons-106685 addons disable                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:50 UTC | 18 Mar 24 12:50 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-106685 addons disable                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:50 UTC | 18 Mar 24 12:50 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:45:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:45:11.015268 1075954 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:45:11.015544 1075954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:45:11.015553 1075954 out.go:304] Setting ErrFile to fd 2...
	I0318 12:45:11.015557 1075954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:45:11.015765 1075954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 12:45:11.016475 1075954 out.go:298] Setting JSON to false
	I0318 12:45:11.017639 1075954 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":16058,"bootTime":1710749853,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:45:11.017715 1075954 start.go:139] virtualization: kvm guest
	I0318 12:45:11.019865 1075954 out.go:177] * [addons-106685] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:45:11.021600 1075954 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 12:45:11.022909 1075954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:45:11.021689 1075954 notify.go:220] Checking for updates...
	I0318 12:45:11.025577 1075954 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 12:45:11.026988 1075954 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 12:45:11.028362 1075954 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 12:45:11.029731 1075954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:45:11.031241 1075954 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:45:11.064025 1075954 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 12:45:11.065587 1075954 start.go:297] selected driver: kvm2
	I0318 12:45:11.065616 1075954 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:45:11.065631 1075954 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:45:11.066336 1075954 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:45:11.066438 1075954 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:45:11.083338 1075954 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:45:11.083402 1075954 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:45:11.083619 1075954 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:45:11.083687 1075954 cni.go:84] Creating CNI manager for ""
	I0318 12:45:11.083701 1075954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:45:11.083710 1075954 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 12:45:11.083760 1075954 start.go:340] cluster config:
	{Name:addons-106685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-106685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:45:11.083895 1075954 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:45:11.085948 1075954 out.go:177] * Starting "addons-106685" primary control-plane node in "addons-106685" cluster
	I0318 12:45:11.087467 1075954 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:45:11.087554 1075954 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 12:45:11.087569 1075954 cache.go:56] Caching tarball of preloaded images
	I0318 12:45:11.087691 1075954 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:45:11.087705 1075954 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 12:45:11.088942 1075954 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/config.json ...
	I0318 12:45:11.089080 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/config.json: {Name:mkb075179247883cdc6357e66c091da0632c780c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:11.089636 1075954 start.go:360] acquireMachinesLock for addons-106685: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:45:11.089745 1075954 start.go:364] duration metric: took 83.912µs to acquireMachinesLock for "addons-106685"
	I0318 12:45:11.089770 1075954 start.go:93] Provisioning new machine with config: &{Name:addons-106685 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:addons-106685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:45:11.089870 1075954 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 12:45:11.091687 1075954 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 12:45:11.091991 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:45:11.092052 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:45:11.107470 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40685
	I0318 12:45:11.108112 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:45:11.108746 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:45:11.108771 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:45:11.109173 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:45:11.109368 1075954 main.go:141] libmachine: (addons-106685) Calling .GetMachineName
	I0318 12:45:11.109562 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:11.109782 1075954 start.go:159] libmachine.API.Create for "addons-106685" (driver="kvm2")
	I0318 12:45:11.109812 1075954 client.go:168] LocalClient.Create starting
	I0318 12:45:11.109853 1075954 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 12:45:11.382933 1075954 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 12:45:11.835577 1075954 main.go:141] libmachine: Running pre-create checks...
	I0318 12:45:11.835603 1075954 main.go:141] libmachine: (addons-106685) Calling .PreCreateCheck
	I0318 12:45:11.836187 1075954 main.go:141] libmachine: (addons-106685) Calling .GetConfigRaw
	I0318 12:45:11.836711 1075954 main.go:141] libmachine: Creating machine...
	I0318 12:45:11.836728 1075954 main.go:141] libmachine: (addons-106685) Calling .Create
	I0318 12:45:11.836920 1075954 main.go:141] libmachine: (addons-106685) Creating KVM machine...
	I0318 12:45:11.838282 1075954 main.go:141] libmachine: (addons-106685) DBG | found existing default KVM network
	I0318 12:45:11.839122 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:11.838953 1075976 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0318 12:45:11.839174 1075954 main.go:141] libmachine: (addons-106685) DBG | created network xml: 
	I0318 12:45:11.839198 1075954 main.go:141] libmachine: (addons-106685) DBG | <network>
	I0318 12:45:11.839212 1075954 main.go:141] libmachine: (addons-106685) DBG |   <name>mk-addons-106685</name>
	I0318 12:45:11.839227 1075954 main.go:141] libmachine: (addons-106685) DBG |   <dns enable='no'/>
	I0318 12:45:11.839235 1075954 main.go:141] libmachine: (addons-106685) DBG |   
	I0318 12:45:11.839246 1075954 main.go:141] libmachine: (addons-106685) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 12:45:11.839258 1075954 main.go:141] libmachine: (addons-106685) DBG |     <dhcp>
	I0318 12:45:11.839270 1075954 main.go:141] libmachine: (addons-106685) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 12:45:11.839280 1075954 main.go:141] libmachine: (addons-106685) DBG |     </dhcp>
	I0318 12:45:11.839291 1075954 main.go:141] libmachine: (addons-106685) DBG |   </ip>
	I0318 12:45:11.839298 1075954 main.go:141] libmachine: (addons-106685) DBG |   
	I0318 12:45:11.839305 1075954 main.go:141] libmachine: (addons-106685) DBG | </network>
	I0318 12:45:11.839336 1075954 main.go:141] libmachine: (addons-106685) DBG | 
	I0318 12:45:11.844813 1075954 main.go:141] libmachine: (addons-106685) DBG | trying to create private KVM network mk-addons-106685 192.168.39.0/24...
	I0318 12:45:11.916130 1075954 main.go:141] libmachine: (addons-106685) DBG | private KVM network mk-addons-106685 192.168.39.0/24 created
	I0318 12:45:11.916172 1075954 main.go:141] libmachine: (addons-106685) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685 ...
	I0318 12:45:11.916197 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:11.916093 1075976 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 12:45:11.916222 1075954 main.go:141] libmachine: (addons-106685) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:45:11.916243 1075954 main.go:141] libmachine: (addons-106685) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:45:12.163608 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:12.163410 1075976 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa...
	I0318 12:45:12.244894 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:12.244720 1075976 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/addons-106685.rawdisk...
	I0318 12:45:12.244935 1075954 main.go:141] libmachine: (addons-106685) DBG | Writing magic tar header
	I0318 12:45:12.244959 1075954 main.go:141] libmachine: (addons-106685) DBG | Writing SSH key tar header
	I0318 12:45:12.244979 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:12.244851 1075976 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685 ...
	I0318 12:45:12.245035 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685
	I0318 12:45:12.245054 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 12:45:12.245068 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685 (perms=drwx------)
	I0318 12:45:12.245084 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 12:45:12.245091 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 12:45:12.245097 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 12:45:12.245106 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 12:45:12.245115 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 12:45:12.245135 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins
	I0318 12:45:12.245156 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 12:45:12.245169 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home
	I0318 12:45:12.245184 1075954 main.go:141] libmachine: (addons-106685) DBG | Skipping /home - not owner
	I0318 12:45:12.245196 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 12:45:12.245205 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 12:45:12.245215 1075954 main.go:141] libmachine: (addons-106685) Creating domain...
	I0318 12:45:12.246437 1075954 main.go:141] libmachine: (addons-106685) define libvirt domain using xml: 
	I0318 12:45:12.246475 1075954 main.go:141] libmachine: (addons-106685) <domain type='kvm'>
	I0318 12:45:12.246487 1075954 main.go:141] libmachine: (addons-106685)   <name>addons-106685</name>
	I0318 12:45:12.246495 1075954 main.go:141] libmachine: (addons-106685)   <memory unit='MiB'>4000</memory>
	I0318 12:45:12.246503 1075954 main.go:141] libmachine: (addons-106685)   <vcpu>2</vcpu>
	I0318 12:45:12.246514 1075954 main.go:141] libmachine: (addons-106685)   <features>
	I0318 12:45:12.246525 1075954 main.go:141] libmachine: (addons-106685)     <acpi/>
	I0318 12:45:12.246534 1075954 main.go:141] libmachine: (addons-106685)     <apic/>
	I0318 12:45:12.246546 1075954 main.go:141] libmachine: (addons-106685)     <pae/>
	I0318 12:45:12.246560 1075954 main.go:141] libmachine: (addons-106685)     
	I0318 12:45:12.246573 1075954 main.go:141] libmachine: (addons-106685)   </features>
	I0318 12:45:12.246583 1075954 main.go:141] libmachine: (addons-106685)   <cpu mode='host-passthrough'>
	I0318 12:45:12.246595 1075954 main.go:141] libmachine: (addons-106685)   
	I0318 12:45:12.246607 1075954 main.go:141] libmachine: (addons-106685)   </cpu>
	I0318 12:45:12.246618 1075954 main.go:141] libmachine: (addons-106685)   <os>
	I0318 12:45:12.246630 1075954 main.go:141] libmachine: (addons-106685)     <type>hvm</type>
	I0318 12:45:12.246648 1075954 main.go:141] libmachine: (addons-106685)     <boot dev='cdrom'/>
	I0318 12:45:12.246672 1075954 main.go:141] libmachine: (addons-106685)     <boot dev='hd'/>
	I0318 12:45:12.246685 1075954 main.go:141] libmachine: (addons-106685)     <bootmenu enable='no'/>
	I0318 12:45:12.246699 1075954 main.go:141] libmachine: (addons-106685)   </os>
	I0318 12:45:12.246711 1075954 main.go:141] libmachine: (addons-106685)   <devices>
	I0318 12:45:12.246720 1075954 main.go:141] libmachine: (addons-106685)     <disk type='file' device='cdrom'>
	I0318 12:45:12.246736 1075954 main.go:141] libmachine: (addons-106685)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/boot2docker.iso'/>
	I0318 12:45:12.246747 1075954 main.go:141] libmachine: (addons-106685)       <target dev='hdc' bus='scsi'/>
	I0318 12:45:12.246755 1075954 main.go:141] libmachine: (addons-106685)       <readonly/>
	I0318 12:45:12.246761 1075954 main.go:141] libmachine: (addons-106685)     </disk>
	I0318 12:45:12.246772 1075954 main.go:141] libmachine: (addons-106685)     <disk type='file' device='disk'>
	I0318 12:45:12.246790 1075954 main.go:141] libmachine: (addons-106685)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 12:45:12.246807 1075954 main.go:141] libmachine: (addons-106685)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/addons-106685.rawdisk'/>
	I0318 12:45:12.246817 1075954 main.go:141] libmachine: (addons-106685)       <target dev='hda' bus='virtio'/>
	I0318 12:45:12.246828 1075954 main.go:141] libmachine: (addons-106685)     </disk>
	I0318 12:45:12.246838 1075954 main.go:141] libmachine: (addons-106685)     <interface type='network'>
	I0318 12:45:12.246847 1075954 main.go:141] libmachine: (addons-106685)       <source network='mk-addons-106685'/>
	I0318 12:45:12.246858 1075954 main.go:141] libmachine: (addons-106685)       <model type='virtio'/>
	I0318 12:45:12.246886 1075954 main.go:141] libmachine: (addons-106685)     </interface>
	I0318 12:45:12.246912 1075954 main.go:141] libmachine: (addons-106685)     <interface type='network'>
	I0318 12:45:12.246937 1075954 main.go:141] libmachine: (addons-106685)       <source network='default'/>
	I0318 12:45:12.246963 1075954 main.go:141] libmachine: (addons-106685)       <model type='virtio'/>
	I0318 12:45:12.246973 1075954 main.go:141] libmachine: (addons-106685)     </interface>
	I0318 12:45:12.246980 1075954 main.go:141] libmachine: (addons-106685)     <serial type='pty'>
	I0318 12:45:12.246989 1075954 main.go:141] libmachine: (addons-106685)       <target port='0'/>
	I0318 12:45:12.246996 1075954 main.go:141] libmachine: (addons-106685)     </serial>
	I0318 12:45:12.247004 1075954 main.go:141] libmachine: (addons-106685)     <console type='pty'>
	I0318 12:45:12.247018 1075954 main.go:141] libmachine: (addons-106685)       <target type='serial' port='0'/>
	I0318 12:45:12.247026 1075954 main.go:141] libmachine: (addons-106685)     </console>
	I0318 12:45:12.247035 1075954 main.go:141] libmachine: (addons-106685)     <rng model='virtio'>
	I0318 12:45:12.247046 1075954 main.go:141] libmachine: (addons-106685)       <backend model='random'>/dev/random</backend>
	I0318 12:45:12.247057 1075954 main.go:141] libmachine: (addons-106685)     </rng>
	I0318 12:45:12.247065 1075954 main.go:141] libmachine: (addons-106685)     
	I0318 12:45:12.247071 1075954 main.go:141] libmachine: (addons-106685)     
	I0318 12:45:12.247079 1075954 main.go:141] libmachine: (addons-106685)   </devices>
	I0318 12:45:12.247089 1075954 main.go:141] libmachine: (addons-106685) </domain>
	I0318 12:45:12.247099 1075954 main.go:141] libmachine: (addons-106685) 
	I0318 12:45:12.251787 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:87:c8:5a in network default
	I0318 12:45:12.252484 1075954 main.go:141] libmachine: (addons-106685) Ensuring networks are active...
	I0318 12:45:12.252507 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:12.253238 1075954 main.go:141] libmachine: (addons-106685) Ensuring network default is active
	I0318 12:45:12.253557 1075954 main.go:141] libmachine: (addons-106685) Ensuring network mk-addons-106685 is active
	I0318 12:45:12.254000 1075954 main.go:141] libmachine: (addons-106685) Getting domain xml...
	I0318 12:45:12.254759 1075954 main.go:141] libmachine: (addons-106685) Creating domain...
	I0318 12:45:13.462813 1075954 main.go:141] libmachine: (addons-106685) Waiting to get IP...
	I0318 12:45:13.463677 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:13.464099 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:13.464121 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:13.464078 1075976 retry.go:31] will retry after 290.892875ms: waiting for machine to come up
	I0318 12:45:13.756719 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:13.757214 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:13.757259 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:13.757156 1075976 retry.go:31] will retry after 352.926024ms: waiting for machine to come up
	I0318 12:45:14.111847 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:14.112276 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:14.112312 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:14.112233 1075976 retry.go:31] will retry after 414.178519ms: waiting for machine to come up
	I0318 12:45:14.527693 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:14.528085 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:14.528117 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:14.528022 1075976 retry.go:31] will retry after 567.10278ms: waiting for machine to come up
	I0318 12:45:15.096787 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:15.097158 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:15.097211 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:15.097100 1075976 retry.go:31] will retry after 566.579197ms: waiting for machine to come up
	I0318 12:45:15.664978 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:15.665384 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:15.665419 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:15.665328 1075976 retry.go:31] will retry after 918.670819ms: waiting for machine to come up
	I0318 12:45:16.586278 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:16.586742 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:16.586772 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:16.586686 1075976 retry.go:31] will retry after 774.966807ms: waiting for machine to come up
	I0318 12:45:17.363763 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:17.364163 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:17.364197 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:17.364114 1075976 retry.go:31] will retry after 1.48184225s: waiting for machine to come up
	I0318 12:45:18.847757 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:18.848261 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:18.848289 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:18.848219 1075976 retry.go:31] will retry after 1.536147853s: waiting for machine to come up
	I0318 12:45:20.385864 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:20.386322 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:20.386352 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:20.386266 1075976 retry.go:31] will retry after 2.056836281s: waiting for machine to come up
	I0318 12:45:22.445269 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:22.445724 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:22.445760 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:22.445676 1075976 retry.go:31] will retry after 2.566944137s: waiting for machine to come up
	I0318 12:45:25.015803 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:25.016350 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:25.016384 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:25.016293 1075976 retry.go:31] will retry after 3.537481726s: waiting for machine to come up
	I0318 12:45:28.556682 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:28.557141 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:28.557170 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:28.557099 1075976 retry.go:31] will retry after 4.234625852s: waiting for machine to come up
	I0318 12:45:32.794340 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:32.794847 1075954 main.go:141] libmachine: (addons-106685) Found IP for machine: 192.168.39.205
	I0318 12:45:32.794902 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has current primary IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:32.794913 1075954 main.go:141] libmachine: (addons-106685) Reserving static IP address...
	I0318 12:45:32.795227 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find host DHCP lease matching {name: "addons-106685", mac: "52:54:00:ae:c4:53", ip: "192.168.39.205"} in network mk-addons-106685
	I0318 12:45:32.872091 1075954 main.go:141] libmachine: (addons-106685) DBG | Getting to WaitForSSH function...
	I0318 12:45:32.872131 1075954 main.go:141] libmachine: (addons-106685) Reserved static IP address: 192.168.39.205
	I0318 12:45:32.872181 1075954 main.go:141] libmachine: (addons-106685) Waiting for SSH to be available...
	I0318 12:45:32.874712 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:32.875065 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685
	I0318 12:45:32.875105 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find defined IP address of network mk-addons-106685 interface with MAC address 52:54:00:ae:c4:53
	I0318 12:45:32.875315 1075954 main.go:141] libmachine: (addons-106685) DBG | Using SSH client type: external
	I0318 12:45:32.875342 1075954 main.go:141] libmachine: (addons-106685) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa (-rw-------)
	I0318 12:45:32.875380 1075954 main.go:141] libmachine: (addons-106685) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:45:32.875405 1075954 main.go:141] libmachine: (addons-106685) DBG | About to run SSH command:
	I0318 12:45:32.875424 1075954 main.go:141] libmachine: (addons-106685) DBG | exit 0
	I0318 12:45:32.879655 1075954 main.go:141] libmachine: (addons-106685) DBG | SSH cmd err, output: exit status 255: 
	I0318 12:45:32.879681 1075954 main.go:141] libmachine: (addons-106685) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0318 12:45:32.879688 1075954 main.go:141] libmachine: (addons-106685) DBG | command : exit 0
	I0318 12:45:32.879696 1075954 main.go:141] libmachine: (addons-106685) DBG | err     : exit status 255
	I0318 12:45:32.879705 1075954 main.go:141] libmachine: (addons-106685) DBG | output  : 
	I0318 12:45:35.881929 1075954 main.go:141] libmachine: (addons-106685) DBG | Getting to WaitForSSH function...
	I0318 12:45:35.884936 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:35.885448 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:35.885489 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:35.885551 1075954 main.go:141] libmachine: (addons-106685) DBG | Using SSH client type: external
	I0318 12:45:35.885572 1075954 main.go:141] libmachine: (addons-106685) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa (-rw-------)
	I0318 12:45:35.885609 1075954 main.go:141] libmachine: (addons-106685) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:45:35.885655 1075954 main.go:141] libmachine: (addons-106685) DBG | About to run SSH command:
	I0318 12:45:35.885673 1075954 main.go:141] libmachine: (addons-106685) DBG | exit 0
	I0318 12:45:36.012423 1075954 main.go:141] libmachine: (addons-106685) DBG | SSH cmd err, output: <nil>: 
	I0318 12:45:36.012870 1075954 main.go:141] libmachine: (addons-106685) KVM machine creation complete!
	I0318 12:45:36.013332 1075954 main.go:141] libmachine: (addons-106685) Calling .GetConfigRaw
	I0318 12:45:36.068639 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:36.131266 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:36.131489 1075954 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 12:45:36.131506 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:45:36.133232 1075954 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 12:45:36.133256 1075954 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 12:45:36.133263 1075954 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 12:45:36.133283 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.136162 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.136497 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.136536 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.136664 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:36.136871 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.137037 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.137180 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:36.137354 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:36.137642 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:36.137661 1075954 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 12:45:36.243627 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:45:36.243662 1075954 main.go:141] libmachine: Detecting the provisioner...
	I0318 12:45:36.243671 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.246659 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.247171 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.247207 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.247361 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:36.247613 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.247822 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.248038 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:36.248206 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:36.248388 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:36.248398 1075954 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 12:45:36.356991 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 12:45:36.357080 1075954 main.go:141] libmachine: found compatible host: buildroot
	I0318 12:45:36.357090 1075954 main.go:141] libmachine: Provisioning with buildroot...
	I0318 12:45:36.357098 1075954 main.go:141] libmachine: (addons-106685) Calling .GetMachineName
	I0318 12:45:36.357441 1075954 buildroot.go:166] provisioning hostname "addons-106685"
	I0318 12:45:36.357479 1075954 main.go:141] libmachine: (addons-106685) Calling .GetMachineName
	I0318 12:45:36.357700 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.360332 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.360708 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.360740 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.360860 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:36.360998 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.361178 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.361289 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:36.361425 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:36.361673 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:36.361692 1075954 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-106685 && echo "addons-106685" | sudo tee /etc/hostname
	I0318 12:45:36.483757 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-106685
	
	I0318 12:45:36.483786 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.486764 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.487132 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.487164 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.487298 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:36.487544 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.487760 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.487974 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:36.488246 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:36.488510 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:36.488533 1075954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-106685' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-106685/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-106685' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:45:36.605298 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:45:36.605337 1075954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 12:45:36.605379 1075954 buildroot.go:174] setting up certificates
	I0318 12:45:36.605390 1075954 provision.go:84] configureAuth start
	I0318 12:45:36.605401 1075954 main.go:141] libmachine: (addons-106685) Calling .GetMachineName
	I0318 12:45:36.605791 1075954 main.go:141] libmachine: (addons-106685) Calling .GetIP
	I0318 12:45:36.608648 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.609071 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.609103 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.609254 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.611764 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.612363 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.612399 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.612611 1075954 provision.go:143] copyHostCerts
	I0318 12:45:36.612715 1075954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 12:45:36.612879 1075954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 12:45:36.612998 1075954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 12:45:36.613072 1075954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.addons-106685 san=[127.0.0.1 192.168.39.205 addons-106685 localhost minikube]
	I0318 12:45:36.867664 1075954 provision.go:177] copyRemoteCerts
	I0318 12:45:36.867758 1075954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:45:36.867794 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.870932 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.871239 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.871266 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.871450 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:36.871710 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.871888 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:36.872064 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:45:36.954687 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:45:36.981528 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 12:45:37.008069 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 12:45:37.034582 1075954 provision.go:87] duration metric: took 429.176891ms to configureAuth
	I0318 12:45:37.034614 1075954 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:45:37.034784 1075954 config.go:182] Loaded profile config "addons-106685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:45:37.034893 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:37.037849 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.038212 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.038266 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.038425 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:37.038654 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.038819 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.038926 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:37.039096 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:37.039299 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:37.039328 1075954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:45:37.322514 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 12:45:37.322547 1075954 main.go:141] libmachine: Checking connection to Docker...
	I0318 12:45:37.322559 1075954 main.go:141] libmachine: (addons-106685) Calling .GetURL
	I0318 12:45:37.324094 1075954 main.go:141] libmachine: (addons-106685) DBG | Using libvirt version 6000000
	I0318 12:45:37.326652 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.327104 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.327131 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.327341 1075954 main.go:141] libmachine: Docker is up and running!
	I0318 12:45:37.327357 1075954 main.go:141] libmachine: Reticulating splines...
	I0318 12:45:37.327367 1075954 client.go:171] duration metric: took 26.217545276s to LocalClient.Create
	I0318 12:45:37.327405 1075954 start.go:167] duration metric: took 26.217620004s to libmachine.API.Create "addons-106685"
	I0318 12:45:37.327417 1075954 start.go:293] postStartSetup for "addons-106685" (driver="kvm2")
	I0318 12:45:37.327427 1075954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:45:37.327445 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:37.327718 1075954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:45:37.327742 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:37.330171 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.330544 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.330585 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.330734 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:37.330945 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.331111 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:37.331256 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:45:37.415063 1075954 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:45:37.419941 1075954 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:45:37.419973 1075954 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 12:45:37.420073 1075954 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 12:45:37.420104 1075954 start.go:296] duration metric: took 92.681482ms for postStartSetup
	I0318 12:45:37.420148 1075954 main.go:141] libmachine: (addons-106685) Calling .GetConfigRaw
	I0318 12:45:37.420781 1075954 main.go:141] libmachine: (addons-106685) Calling .GetIP
	I0318 12:45:37.423622 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.424116 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.424150 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.424426 1075954 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/config.json ...
	I0318 12:45:37.424654 1075954 start.go:128] duration metric: took 26.334770448s to createHost
	I0318 12:45:37.424683 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:37.426995 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.427339 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.427378 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.427468 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:37.427671 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.427882 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.428024 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:37.428188 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:37.428412 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:37.428428 1075954 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 12:45:37.537153 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765937.522700288
	
	I0318 12:45:37.537184 1075954 fix.go:216] guest clock: 1710765937.522700288
	I0318 12:45:37.537210 1075954 fix.go:229] Guest: 2024-03-18 12:45:37.522700288 +0000 UTC Remote: 2024-03-18 12:45:37.424668799 +0000 UTC m=+26.459204216 (delta=98.031489ms)
	I0318 12:45:37.537283 1075954 fix.go:200] guest clock delta is within tolerance: 98.031489ms
	I0318 12:45:37.537292 1075954 start.go:83] releasing machines lock for "addons-106685", held for 26.447533925s
	I0318 12:45:37.537322 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:37.537673 1075954 main.go:141] libmachine: (addons-106685) Calling .GetIP
	I0318 12:45:37.540401 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.540740 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.540774 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.540943 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:37.541446 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:37.541662 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:37.541773 1075954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:45:37.541844 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:37.541907 1075954 ssh_runner.go:195] Run: cat /version.json
	I0318 12:45:37.541931 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:37.544456 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.544745 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.544771 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.544792 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.544954 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:37.545162 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.545326 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.545347 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.545364 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:37.545497 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:37.545584 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:45:37.545673 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.545817 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:37.545970 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:45:37.646848 1075954 ssh_runner.go:195] Run: systemctl --version
	I0318 12:45:37.653170 1075954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:45:37.821719 1075954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:45:37.829581 1075954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:45:37.829665 1075954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:45:37.847432 1075954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:45:37.847476 1075954 start.go:494] detecting cgroup driver to use...
	I0318 12:45:37.847562 1075954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:45:37.870207 1075954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:45:37.885705 1075954 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:45:37.885765 1075954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:45:37.901549 1075954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:45:37.916883 1075954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:45:38.039774 1075954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:45:38.215989 1075954 docker.go:233] disabling docker service ...
	I0318 12:45:38.216093 1075954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:45:38.232133 1075954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:45:38.245407 1075954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:45:38.379181 1075954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:45:38.509113 1075954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:45:38.524157 1075954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:45:38.543882 1075954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:45:38.543961 1075954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:38.554833 1075954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:45:38.554922 1075954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:38.565763 1075954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:38.576544 1075954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:38.587614 1075954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:45:38.598670 1075954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:45:38.608147 1075954 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 12:45:38.608226 1075954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 12:45:38.622528 1075954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:45:38.632548 1075954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:45:38.755882 1075954 ssh_runner.go:195] Run: sudo systemctl restart crio
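The container-runtime preparation recorded above reduces to a short, idempotent shell sequence. The sketch below simply restates the commands the log shows in one place; every path and value (the crictl endpoint, the pause image, the cgroupfs driver, the 02-crio.conf drop-in) is taken from the log lines above, nothing else is assumed.

	# Consolidated sketch of the CRI-O preparation steps logged above (not an authoritative minikube script).
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
	sudo modprobe br_netfilter                       # bridge-nf-call-iptables was missing until this module loads
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio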
	I0318 12:45:38.907853 1075954 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:45:38.907972 1075954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:45:38.913656 1075954 start.go:562] Will wait 60s for crictl version
	I0318 12:45:38.913747 1075954 ssh_runner.go:195] Run: which crictl
	I0318 12:45:38.917820 1075954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:45:38.960367 1075954 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:45:38.960486 1075954 ssh_runner.go:195] Run: crio --version
	I0318 12:45:38.990349 1075954 ssh_runner.go:195] Run: crio --version
	I0318 12:45:39.025630 1075954 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:45:39.027096 1075954 main.go:141] libmachine: (addons-106685) Calling .GetIP
	I0318 12:45:39.029985 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:39.030296 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:39.030350 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:39.030527 1075954 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:45:39.034931 1075954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:45:39.047956 1075954 kubeadm.go:877] updating cluster {Name:addons-106685 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:addons-106685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:45:39.048090 1075954 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:45:39.048146 1075954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:45:39.082992 1075954 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 12:45:39.083081 1075954 ssh_runner.go:195] Run: which lz4
	I0318 12:45:39.087448 1075954 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 12:45:39.091848 1075954 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 12:45:39.091890 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 12:45:40.783677 1075954 crio.go:444] duration metric: took 1.696280623s to copy over tarball
	I0318 12:45:40.783786 1075954 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 12:45:43.492467 1075954 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.708639438s)
	I0318 12:45:43.492509 1075954 crio.go:451] duration metric: took 2.708788824s to extract the tarball
	I0318 12:45:43.492521 1075954 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 12:45:43.535449 1075954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:45:43.576356 1075954 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 12:45:43.576382 1075954 cache_images.go:84] Images are preloaded, skipping loading
	I0318 12:45:43.576394 1075954 kubeadm.go:928] updating node { 192.168.39.205 8443 v1.28.4 crio true true} ...
	I0318 12:45:43.576506 1075954 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-106685 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-106685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
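The kubelet unit drop-in above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (and the base unit to /lib/systemd/system/kubelet.service) by the scp steps a few lines further down. If a failed run needs inspecting, a quick sanity check on the guest is:

	# systemd prints the base unit followed by every drop-in, so the rendered ExecStart is visible here.
	sudo systemctl cat kubelet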
	I0318 12:45:43.576600 1075954 ssh_runner.go:195] Run: crio config
	I0318 12:45:43.628071 1075954 cni.go:84] Creating CNI manager for ""
	I0318 12:45:43.628098 1075954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:45:43.628112 1075954 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:45:43.628139 1075954 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-106685 NodeName:addons-106685 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:45:43.628309 1075954 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-106685"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
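	If a run like this needs debugging, the generated config above (copied to /var/tmp/minikube/kubeadm.yaml before init, as the later log lines show) can be sanity-checked ahead of time. A minimal sketch, assuming shell access to the guest and that this kubeadm build ships the newer `config validate` subcommand:
	
	# Sketch only: validate the rendered kubeadm config inside the VM before init runs.
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml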
	
	I0318 12:45:43.628376 1075954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:45:43.639013 1075954 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:45:43.639104 1075954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 12:45:43.649537 1075954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 12:45:43.667444 1075954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:45:43.685194 1075954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0318 12:45:43.703472 1075954 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I0318 12:45:43.707976 1075954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:45:43.721502 1075954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:45:43.842694 1075954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:45:43.860185 1075954 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685 for IP: 192.168.39.205
	I0318 12:45:43.860216 1075954 certs.go:194] generating shared ca certs ...
	I0318 12:45:43.860236 1075954 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:43.860402 1075954 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 12:45:43.965932 1075954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt ...
	I0318 12:45:43.965968 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt: {Name:mk5f9551de9c497d1c59382d38e79a61c6cfd7c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:43.966185 1075954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key ...
	I0318 12:45:43.966201 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key: {Name:mk41a0f707f6782a7d808da53e4fcdabcf550858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:43.966342 1075954 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 12:45:44.030624 1075954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt ...
	I0318 12:45:44.030661 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt: {Name:mkd59dd5caba64aef304a4b13ca0d6338782347a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.030827 1075954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key ...
	I0318 12:45:44.030839 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key: {Name:mk911f9d9682c437c92758b0616767e4bda773e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.030910 1075954 certs.go:256] generating profile certs ...
	I0318 12:45:44.030982 1075954 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.key
	I0318 12:45:44.030998 1075954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt with IP's: []
	I0318 12:45:44.350157 1075954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt ...
	I0318 12:45:44.350193 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: {Name:mk0c2e9276cbcab9a530edc0a7cd4eec0d2a232b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.350354 1075954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.key ...
	I0318 12:45:44.350365 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.key: {Name:mk2254574a4a4d0953d51ff29d16fe78ebb8c6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.350435 1075954 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key.927156ff
	I0318 12:45:44.350460 1075954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt.927156ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205]
	I0318 12:45:44.424645 1075954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt.927156ff ...
	I0318 12:45:44.424679 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt.927156ff: {Name:mkff85023b3ecfdabda3962ce6116dea82c5da82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.424858 1075954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key.927156ff ...
	I0318 12:45:44.424872 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key.927156ff: {Name:mkebd83009dd1139661d27893984c331ba1dfe2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.424948 1075954 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt.927156ff -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt
	I0318 12:45:44.425022 1075954 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key.927156ff -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key
	I0318 12:45:44.425066 1075954 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.key
	I0318 12:45:44.425085 1075954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.crt with IP's: []
	I0318 12:45:44.672811 1075954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.crt ...
	I0318 12:45:44.672846 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.crt: {Name:mka7be3b25a4b14abe83604a5406042112834714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.673042 1075954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.key ...
	I0318 12:45:44.673065 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.key: {Name:mke2fdfc89764503c7adabc96fde8b082b491125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.673248 1075954 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 12:45:44.673288 1075954 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:45:44.673314 1075954 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:45:44.673340 1075954 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 12:45:44.673966 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:45:44.701672 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:45:44.727786 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:45:44.755257 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:45:44.783127 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0318 12:45:44.815084 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 12:45:44.845776 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:45:44.877105 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 12:45:44.909143 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:45:44.941714 1075954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 12:45:44.961585 1075954 ssh_runner.go:195] Run: openssl version
	I0318 12:45:44.967933 1075954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:45:44.980360 1075954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:44.985579 1075954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:44.985639 1075954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:44.991938 1075954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
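The apiserver certificate copied above was generated with the SANs listed earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.205). A hedged way to confirm that on the guest with stock openssl, assuming the same /var/lib/minikube/certs layout the scp steps used:

	# Show the Subject Alternative Name extension of the copied apiserver certificate.
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'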
	I0318 12:45:45.004221 1075954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:45:45.008958 1075954 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:45:45.009016 1075954 kubeadm.go:391] StartCluster: {Name:addons-106685 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 C
lusterName:addons-106685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:45:45.009127 1075954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 12:45:45.009193 1075954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 12:45:45.050728 1075954 cri.go:89] found id: ""
	I0318 12:45:45.050807 1075954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 12:45:45.062192 1075954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 12:45:45.073044 1075954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 12:45:45.083862 1075954 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:45:45.083885 1075954 kubeadm.go:156] found existing configuration files:
	
	I0318 12:45:45.083941 1075954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 12:45:45.094595 1075954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:45:45.094667 1075954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 12:45:45.105062 1075954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 12:45:45.116425 1075954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:45:45.116510 1075954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 12:45:45.127846 1075954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 12:45:45.138231 1075954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:45:45.138304 1075954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 12:45:45.149189 1075954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 12:45:45.159556 1075954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:45:45.159622 1075954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 12:45:45.170708 1075954 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 12:45:45.380937 1075954 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 12:45:55.021435 1075954 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 12:45:55.021522 1075954 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 12:45:55.021602 1075954 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 12:45:55.021717 1075954 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 12:45:55.021825 1075954 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 12:45:55.021931 1075954 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:45:55.023787 1075954 out.go:204]   - Generating certificates and keys ...
	I0318 12:45:55.023899 1075954 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 12:45:55.023985 1075954 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 12:45:55.024081 1075954 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 12:45:55.024203 1075954 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 12:45:55.024282 1075954 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 12:45:55.024352 1075954 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 12:45:55.024438 1075954 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 12:45:55.024606 1075954 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-106685 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0318 12:45:55.024700 1075954 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 12:45:55.024886 1075954 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-106685 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0318 12:45:55.024981 1075954 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 12:45:55.025080 1075954 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 12:45:55.025152 1075954 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 12:45:55.025214 1075954 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:45:55.025260 1075954 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:45:55.025308 1075954 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:45:55.025361 1075954 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:45:55.025426 1075954 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:45:55.025515 1075954 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 12:45:55.025596 1075954 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:45:55.027204 1075954 out.go:204]   - Booting up control plane ...
	I0318 12:45:55.027329 1075954 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:45:55.027431 1075954 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:45:55.027532 1075954 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:45:55.027662 1075954 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:45:55.027793 1075954 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:45:55.027886 1075954 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 12:45:55.028043 1075954 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 12:45:55.028148 1075954 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002163 seconds
	I0318 12:45:55.028271 1075954 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 12:45:55.028419 1075954 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 12:45:55.028483 1075954 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 12:45:55.028676 1075954 kubeadm.go:309] [mark-control-plane] Marking the node addons-106685 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 12:45:55.028758 1075954 kubeadm.go:309] [bootstrap-token] Using token: cv7fgx.9rsgzbp5eibqd9vf
	I0318 12:45:55.030163 1075954 out.go:204]   - Configuring RBAC rules ...
	I0318 12:45:55.030292 1075954 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 12:45:55.030390 1075954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 12:45:55.030576 1075954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 12:45:55.030717 1075954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 12:45:55.030841 1075954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 12:45:55.030944 1075954 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 12:45:55.031045 1075954 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 12:45:55.031085 1075954 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 12:45:55.031128 1075954 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 12:45:55.031134 1075954 kubeadm.go:309] 
	I0318 12:45:55.031186 1075954 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 12:45:55.031199 1075954 kubeadm.go:309] 
	I0318 12:45:55.031268 1075954 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 12:45:55.031275 1075954 kubeadm.go:309] 
	I0318 12:45:55.031316 1075954 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 12:45:55.031403 1075954 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 12:45:55.031475 1075954 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 12:45:55.031484 1075954 kubeadm.go:309] 
	I0318 12:45:55.031576 1075954 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 12:45:55.031596 1075954 kubeadm.go:309] 
	I0318 12:45:55.031675 1075954 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 12:45:55.031686 1075954 kubeadm.go:309] 
	I0318 12:45:55.031748 1075954 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 12:45:55.031838 1075954 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 12:45:55.031954 1075954 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 12:45:55.031968 1075954 kubeadm.go:309] 
	I0318 12:45:55.032066 1075954 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 12:45:55.032165 1075954 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 12:45:55.032175 1075954 kubeadm.go:309] 
	I0318 12:45:55.032288 1075954 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token cv7fgx.9rsgzbp5eibqd9vf \
	I0318 12:45:55.032410 1075954 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 12:45:55.032439 1075954 kubeadm.go:309] 	--control-plane 
	I0318 12:45:55.032448 1075954 kubeadm.go:309] 
	I0318 12:45:55.032555 1075954 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 12:45:55.032563 1075954 kubeadm.go:309] 
	I0318 12:45:55.032629 1075954 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token cv7fgx.9rsgzbp5eibqd9vf \
	I0318 12:45:55.032729 1075954 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
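	At this point kubeadm init has finished. A minimal sanity check against the new control plane, using the kubectl binary and admin kubeconfig paths shown in the output above, would be roughly:

	    # Sketch only: confirm the API server answers and the node registered.
	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide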
	I0318 12:45:55.032741 1075954 cni.go:84] Creating CNI manager for ""
	I0318 12:45:55.032748 1075954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:45:55.035083 1075954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 12:45:55.036245 1075954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 12:45:55.103761 1075954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
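	The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI config. Its exact contents are not captured in this log; a minimal sketch of a bridge conflist of this shape (subnet and plugin values are assumptions, not taken from this run) looks like:

	    # Sketch: write a minimal bridge CNI config; field values here are assumed.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF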
	I0318 12:45:55.163387 1075954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 12:45:55.163458 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:55.163518 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-106685 minikube.k8s.io/updated_at=2024_03_18T12_45_55_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=addons-106685 minikube.k8s.io/primary=true
	I0318 12:45:55.206033 1075954 ops.go:34] apiserver oom_adj: -16
	I0318 12:45:55.298781 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:55.799003 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:56.299315 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:56.799220 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:57.299355 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:57.799725 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:58.298886 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:58.799600 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:59.299091 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:59.799648 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:00.299723 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:00.799476 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:01.299164 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:01.799077 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:02.299358 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:02.798941 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:03.299852 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:03.799861 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:04.299807 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:04.799395 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:05.299430 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:05.799443 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:06.299627 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:06.798985 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:06.897074 1075954 kubeadm.go:1107] duration metric: took 11.73367192s to wait for elevateKubeSystemPrivileges
	W0318 12:46:06.897146 1075954 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 12:46:06.897159 1075954 kubeadm.go:393] duration metric: took 21.888147741s to StartCluster
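	The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait: minikube polls until the "default" ServiceAccount exists before relying on the kube-system cluster-admin binding created earlier. A shell sketch of that wait (the 0.5s interval is an assumption) is:

	    # Sketch: poll until the controller manager has created the default ServiceAccount.
	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done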
	I0318 12:46:06.897185 1075954 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:46:06.897333 1075954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 12:46:06.897835 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:46:06.898119 1075954 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:46:06.900039 1075954 out.go:177] * Verifying Kubernetes components...
	I0318 12:46:06.898168 1075954 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
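	The toEnable map above is the addon set resolved for this start. The same set can be inspected or adjusted per profile afterwards, for example:

	    # Sketch: toggle or list addons on this profile from the host.
	    out/minikube-linux-amd64 -p addons-106685 addons enable metrics-server
	    out/minikube-linux-amd64 -p addons-106685 addons list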
	I0318 12:46:06.898133 1075954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 12:46:06.898360 1075954 config.go:182] Loaded profile config "addons-106685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:46:06.901440 1075954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:46:06.901449 1075954 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-106685"
	I0318 12:46:06.901464 1075954 addons.go:69] Setting metrics-server=true in profile "addons-106685"
	I0318 12:46:06.901469 1075954 addons.go:69] Setting gcp-auth=true in profile "addons-106685"
	I0318 12:46:06.901487 1075954 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-106685"
	I0318 12:46:06.901510 1075954 mustload.go:65] Loading cluster: addons-106685
	I0318 12:46:06.901502 1075954 addons.go:69] Setting storage-provisioner=true in profile "addons-106685"
	I0318 12:46:06.901519 1075954 addons.go:234] Setting addon metrics-server=true in "addons-106685"
	I0318 12:46:06.901526 1075954 addons.go:69] Setting helm-tiller=true in profile "addons-106685"
	I0318 12:46:06.901533 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901442 1075954 addons.go:69] Setting yakd=true in profile "addons-106685"
	I0318 12:46:06.901541 1075954 addons.go:234] Setting addon storage-provisioner=true in "addons-106685"
	I0318 12:46:06.901547 1075954 addons.go:234] Setting addon helm-tiller=true in "addons-106685"
	I0318 12:46:06.901555 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901558 1075954 addons.go:234] Setting addon yakd=true in "addons-106685"
	I0318 12:46:06.901575 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901580 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901581 1075954 addons.go:69] Setting registry=true in profile "addons-106685"
	I0318 12:46:06.901600 1075954 addons.go:234] Setting addon registry=true in "addons-106685"
	I0318 12:46:06.901620 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901735 1075954 config.go:182] Loaded profile config "addons-106685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:46:06.901786 1075954 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-106685"
	I0318 12:46:06.901836 1075954 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-106685"
	I0318 12:46:06.901971 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902001 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902054 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902063 1075954 addons.go:69] Setting volumesnapshots=true in profile "addons-106685"
	I0318 12:46:06.902067 1075954 addons.go:69] Setting cloud-spanner=true in profile "addons-106685"
	I0318 12:46:06.902086 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902099 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902099 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902106 1075954 addons.go:234] Setting addon volumesnapshots=true in "addons-106685"
	I0318 12:46:06.902120 1075954 addons.go:234] Setting addon cloud-spanner=true in "addons-106685"
	I0318 12:46:06.902124 1075954 addons.go:69] Setting ingress-dns=true in profile "addons-106685"
	I0318 12:46:06.902139 1075954 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-106685"
	I0318 12:46:06.902161 1075954 addons.go:69] Setting default-storageclass=true in profile "addons-106685"
	I0318 12:46:06.902192 1075954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-106685"
	I0318 12:46:06.902196 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902203 1075954 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-106685"
	I0318 12:46:06.902231 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.901575 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901511 1075954 addons.go:69] Setting ingress=true in profile "addons-106685"
	I0318 12:46:06.902289 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902312 1075954 addons.go:234] Setting addon ingress=true in "addons-106685"
	I0318 12:46:06.902319 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902120 1075954 addons.go:69] Setting inspektor-gadget=true in profile "addons-106685"
	I0318 12:46:06.902391 1075954 addons.go:234] Setting addon inspektor-gadget=true in "addons-106685"
	I0318 12:46:06.902124 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902422 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902467 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902150 1075954 addons.go:234] Setting addon ingress-dns=true in "addons-106685"
	I0318 12:46:06.902540 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902571 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.902579 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902144 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902256 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.902231 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.902794 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902816 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.902853 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902803 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.902909 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902944 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.903018 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.903046 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.903152 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.903178 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.903200 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.903274 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.903294 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.903333 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.903372 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.923132 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0318 12:46:06.923152 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0318 12:46:06.923135 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I0318 12:46:06.923140 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I0318 12:46:06.923895 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.923944 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.924009 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.924462 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.924472 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.924488 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.924637 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.924660 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.924774 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.924797 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.924983 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.925004 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.925069 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.925207 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.925271 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.925319 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.925831 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.925878 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.926467 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.926495 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.926677 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.926718 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.927241 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.927267 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.934199 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36821
	I0318 12:46:06.934729 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.935594 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.935615 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.936223 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.936553 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.941070 1075954 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-106685"
	I0318 12:46:06.941128 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.941542 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.941584 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.944147 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.944207 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.946315 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44461
	I0318 12:46:06.948809 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0318 12:46:06.949245 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.949490 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.949964 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.949984 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.950221 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.950251 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.950633 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.950872 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.950988 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.951478 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.951514 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.954027 1075954 addons.go:234] Setting addon default-storageclass=true in "addons-106685"
	I0318 12:46:06.954079 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.954452 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.954507 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.957150 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0318 12:46:06.957828 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.958562 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.958596 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.959114 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.966371 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I0318 12:46:06.966970 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.967092 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.967633 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.967667 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.968158 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.968767 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.968813 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.969053 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39337
	I0318 12:46:06.969523 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.969766 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:06.970064 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.970085 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.972253 1075954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:46:06.970711 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.974126 1075954 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:46:06.974142 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 12:46:06.974168 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:06.974460 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.975274 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36683
	I0318 12:46:06.975387 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I0318 12:46:06.975728 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.976369 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.976439 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.976451 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.976908 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.976956 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.977412 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.977425 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.977464 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.978125 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34841
	I0318 12:46:06.978528 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.979056 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.979080 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.979228 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.979240 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.979600 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.979807 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.980006 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.980051 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:06.980061 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.980714 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:06.980737 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0318 12:46:06.980714 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:06.980777 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.981242 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.981284 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.981502 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:06.981711 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:06.982061 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0318 12:46:06.982241 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
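	The ssh client created above is the same channel "minikube ssh" uses. A manual equivalent using the key path, user, and IP from that log line (StrictHostKeyChecking relaxed here only because the VM is throwaway) would be roughly:

	    # Sketch: open the same SSH session to the addons-106685 node by hand.
	    ssh -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa \
	        -o StrictHostKeyChecking=no docker@192.168.39.205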
	I0318 12:46:06.982555 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.982579 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:06.982556 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.984889 1075954 out.go:177]   - Using image docker.io/registry:2.8.3
	I0318 12:46:06.984203 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.984206 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.984864 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37601
	I0318 12:46:06.986436 1075954 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0318 12:46:06.986491 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.987876 1075954 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0318 12:46:06.986507 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.987892 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0318 12:46:06.987916 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:06.986471 1075954 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0318 12:46:06.987333 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.989439 1075954 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0318 12:46:06.988270 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.989454 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0318 12:46:06.989477 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:06.988398 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.990237 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44105
	I0318 12:46:06.990337 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.990357 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.990372 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.990421 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.990829 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.991367 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.991676 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.991693 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.992213 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.992235 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.992623 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:06.994243 1075954 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0318 12:46:06.993038 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.994723 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38877
	I0318 12:46:06.995029 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:06.995707 1075954 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0318 12:46:06.995929 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0318 12:46:06.995961 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:06.997451 1075954 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0318 12:46:06.996540 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.996574 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.997715 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.997718 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.998752 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.998813 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:06.998881 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:06.998903 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.999004 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:06.999006 1075954 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 12:46:06.999068 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0318 12:46:06.999074 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:06.999083 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:06.999094 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.998336 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:06.999116 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:06.999231 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:06.999581 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:06.999766 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:06.999938 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.000331 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.000359 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.001289 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.002185 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.002280 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I0318 12:46:07.002502 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.002602 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.002641 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.002712 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.002934 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.003114 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.003500 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.003512 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.003617 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.003724 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.003929 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.004090 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.004206 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:07.004254 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:07.004242 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.005314 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.005952 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.005971 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.006438 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.007109 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:07.007149 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:07.009674 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0318 12:46:07.010235 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.010865 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.010882 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.011317 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.011998 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:07.012047 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:07.015106 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41125
	I0318 12:46:07.015679 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.015940 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I0318 12:46:07.016325 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.016345 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.016756 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.017005 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.019682 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42689
	I0318 12:46:07.020274 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.020328 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.020937 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.020963 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.021458 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.021517 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33053
	I0318 12:46:07.021847 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.021870 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.022252 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:07.022298 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:07.022395 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.022633 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.023209 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.023915 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.023934 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.024406 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.025230 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:07.025285 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:07.025677 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.027971 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0318 12:46:07.026277 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I0318 12:46:07.031203 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0318 12:46:07.030244 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.030499 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I0318 12:46:07.032695 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0318 12:46:07.034172 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0318 12:46:07.033674 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.033966 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.035729 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0318 12:46:07.035802 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.036575 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.037270 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.037331 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0318 12:46:07.037791 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.038305 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.039391 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0318 12:46:07.039598 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.040035 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0318 12:46:07.040377 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.040668 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0318 12:46:07.040907 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0318 12:46:07.042199 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0318 12:46:07.041363 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.041416 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.042220 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0318 12:46:07.042451 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.042786 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0318 12:46:07.043344 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.043663 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.043686 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.043753 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.044084 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.044217 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.044227 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.046058 1075954 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0318 12:46:07.044675 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.044836 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.045314 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.045766 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.046433 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.047419 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.047515 1075954 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 12:46:07.047623 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.048630 1075954 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0318 12:46:07.047628 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.048691 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.048711 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0318 12:46:07.048738 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0318 12:46:07.048911 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.049156 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.050046 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.050072 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.050079 1075954 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0318 12:46:07.050091 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0318 12:46:07.050181 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.050948 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.050960 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.050987 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.051033 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0318 12:46:07.051611 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I0318 12:46:07.051667 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.051681 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.053232 1075954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0318 12:46:07.051995 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.052366 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.053046 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.052330 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.053861 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.055094 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.055103 1075954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 12:46:07.056626 1075954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 12:46:07.055116 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.055132 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.055313 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.055606 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.055735 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.056024 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.057612 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.058085 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.060571 1075954 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0318 12:46:07.058359 1075954 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 12:46:07.060604 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0318 12:46:07.060625 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.058539 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.058659 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.059349 1075954 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0318 12:46:07.059373 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.059381 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.059398 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.059498 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.059523 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.060899 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.061027 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.062178 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0318 12:46:07.062192 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.062386 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.062388 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.062746 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.063523 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0318 12:46:07.063538 1075954 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 12:46:07.063546 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.063554 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 12:46:07.063571 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.063575 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.063708 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.063903 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.064426 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.064445 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.064652 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.064846 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.065031 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.065175 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.066196 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.066678 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.068662 1075954 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0318 12:46:07.070528 1075954 out.go:177]   - Using image docker.io/busybox:stable
	I0318 12:46:07.067772 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.069055 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.069103 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.069377 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.070018 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.070056 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.070617 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.072102 1075954 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 12:46:07.072120 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0318 12:46:07.072127 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.072134 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.070633 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.072152 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.070742 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.073889 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0318 12:46:07.070867 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.070899 1075954 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 12:46:07.072341 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.075334 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 12:46:07.075364 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.075425 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0318 12:46:07.075444 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0318 12:46:07.075464 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.075557 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.075869 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.076174 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.076201 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.076285 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.076474 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.076678 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.077438 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.077659 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.077853 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.079418 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.079840 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.079871 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.079926 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.080108 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.080317 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.080393 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.080444 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.080501 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.080541 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.080613 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.080743 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.080848 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.080987 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	W0318 12:46:07.083206 1075954 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57896->192.168.39.205:22: read: connection reset by peer
	I0318 12:46:07.083239 1075954 retry.go:31] will retry after 229.118256ms: ssh: handshake failed: read tcp 192.168.39.1:57896->192.168.39.205:22: read: connection reset by peer
	I0318 12:46:07.243888 1075954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:46:07.435554 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 12:46:07.549885 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0318 12:46:07.572042 1075954 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0318 12:46:07.572074 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0318 12:46:07.574032 1075954 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0318 12:46:07.574053 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0318 12:46:07.575093 1075954 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0318 12:46:07.575107 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0318 12:46:07.658566 1075954 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0318 12:46:07.658597 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0318 12:46:07.659135 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 12:46:07.703163 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:46:07.727752 1075954 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 12:46:07.727784 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0318 12:46:07.729400 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 12:46:07.742915 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0318 12:46:07.742957 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0318 12:46:07.772679 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 12:46:07.773209 1075954 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0318 12:46:07.773240 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0318 12:46:07.829030 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 12:46:07.858865 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0318 12:46:07.858905 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0318 12:46:07.868511 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0318 12:46:07.868548 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0318 12:46:07.943078 1075954 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 12:46:07.943119 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0318 12:46:07.956841 1075954 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0318 12:46:07.956883 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0318 12:46:07.969623 1075954 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0318 12:46:07.969655 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0318 12:46:07.985759 1075954 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.084291114s)
	I0318 12:46:07.985962 1075954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
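The pipeline above edits CoreDNS's Corefile in place before replacing the ConfigMap, so pods can resolve the host machine by name. The block it splices in ahead of the forward directive (copied from the sed expression above; the rest of the Corefile is left unchanged) is:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}

With that record in place, in-cluster lookups of host.minikube.internal resolve to 192.168.39.1, the host side of the mk-addons-106685 libvirt network.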
	I0318 12:46:08.004731 1075954 node_ready.go:35] waiting up to 6m0s for node "addons-106685" to be "Ready" ...
	I0318 12:46:08.008696 1075954 node_ready.go:49] node "addons-106685" has status "Ready":"True"
	I0318 12:46:08.008735 1075954 node_ready.go:38] duration metric: took 3.970703ms for node "addons-106685" to be "Ready" ...
	I0318 12:46:08.008749 1075954 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:46:08.017461 1075954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgjhz" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:08.084509 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0318 12:46:08.085821 1075954 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0318 12:46:08.085857 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0318 12:46:08.092738 1075954 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0318 12:46:08.092770 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0318 12:46:08.150853 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 12:46:08.157608 1075954 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 12:46:08.157654 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 12:46:08.197687 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0318 12:46:08.197730 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0318 12:46:08.208581 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0318 12:46:08.208630 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0318 12:46:08.308143 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0318 12:46:08.308179 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0318 12:46:08.344572 1075954 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0318 12:46:08.344598 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0318 12:46:08.446574 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0318 12:46:08.446607 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0318 12:46:08.468585 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0318 12:46:08.468620 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0318 12:46:08.492557 1075954 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 12:46:08.492586 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 12:46:08.716550 1075954 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 12:46:08.716574 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0318 12:46:08.761077 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0318 12:46:08.761110 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0318 12:46:08.767444 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0318 12:46:08.939598 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 12:46:08.982206 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 12:46:09.027486 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0318 12:46:09.027516 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0318 12:46:09.252532 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0318 12:46:09.252561 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0318 12:46:09.347238 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0318 12:46:09.347274 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0318 12:46:09.679245 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0318 12:46:09.679285 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0318 12:46:09.767958 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0318 12:46:09.767998 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0318 12:46:10.034957 1075954 pod_ready.go:102] pod "coredns-5dd5756b68-fgjhz" in "kube-system" namespace has status "Ready":"False"
	I0318 12:46:10.113346 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 12:46:10.113376 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0318 12:46:10.245194 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0318 12:46:10.245221 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0318 12:46:10.438070 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 12:46:10.626249 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0318 12:46:10.626275 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0318 12:46:10.979251 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 12:46:10.979282 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0318 12:46:11.382457 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 12:46:11.525767 1075954 pod_ready.go:92] pod "coredns-5dd5756b68-fgjhz" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.525803 1075954 pod_ready.go:81] duration metric: took 3.508298536s for pod "coredns-5dd5756b68-fgjhz" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.525819 1075954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qf446" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.538642 1075954 pod_ready.go:92] pod "coredns-5dd5756b68-qf446" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.538678 1075954 pod_ready.go:81] duration metric: took 12.84949ms for pod "coredns-5dd5756b68-qf446" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.538693 1075954 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.560353 1075954 pod_ready.go:92] pod "etcd-addons-106685" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.560390 1075954 pod_ready.go:81] duration metric: took 21.686991ms for pod "etcd-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.560406 1075954 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.585311 1075954 pod_ready.go:92] pod "kube-apiserver-addons-106685" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.585346 1075954 pod_ready.go:81] duration metric: took 24.929677ms for pod "kube-apiserver-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.585360 1075954 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.629433 1075954 pod_ready.go:92] pod "kube-controller-manager-addons-106685" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.629473 1075954 pod_ready.go:81] duration metric: took 44.101027ms for pod "kube-controller-manager-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.629488 1075954 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ll74j" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.921536 1075954 pod_ready.go:92] pod "kube-proxy-ll74j" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.921565 1075954 pod_ready.go:81] duration metric: took 292.067694ms for pod "kube-proxy-ll74j" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.921579 1075954 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:12.321727 1075954 pod_ready.go:92] pod "kube-scheduler-addons-106685" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:12.321762 1075954 pod_ready.go:81] duration metric: took 400.174287ms for pod "kube-scheduler-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:12.321774 1075954 pod_ready.go:38] duration metric: took 4.313009788s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:46:12.321791 1075954 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:46:12.321844 1075954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:46:13.632535 1075954 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0318 12:46:13.632585 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:13.636480 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:13.637101 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:13.637131 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:13.637307 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:13.637542 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:13.637744 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:13.637887 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:14.143384 1075954 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0318 12:46:14.445884 1075954 addons.go:234] Setting addon gcp-auth=true in "addons-106685"
	I0318 12:46:14.445963 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:14.446307 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:14.446340 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:14.464016 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0318 12:46:14.464659 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:14.465247 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:14.465275 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:14.465672 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:14.466185 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:14.466215 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:14.482816 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44055
	I0318 12:46:14.483360 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:14.483892 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:14.483915 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:14.484304 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:14.484523 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:14.486284 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:14.486574 1075954 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0318 12:46:14.486607 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:14.489668 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:14.490182 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:14.490216 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:14.490531 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:14.490720 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:14.490927 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:14.491117 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:18.379950 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.830012297s)
	I0318 12:46:18.380022 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380038 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380055 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.720888293s)
	I0318 12:46:18.380093 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.944497327s)
	I0318 12:46:18.380105 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380197 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380210 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.650787889s)
	I0318 12:46:18.380172 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.676973345s)
	I0318 12:46:18.380258 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380268 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380291 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.607567859s)
	I0318 12:46:18.380320 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380332 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380423 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.551359195s)
	I0318 12:46:18.380175 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380464 1075954 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (10.394480882s)
	I0318 12:46:18.380480 1075954 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0318 12:46:18.380510 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.380522 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.380523 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.380525 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.380537 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.380551 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.380553 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380559 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.380561 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380568 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380576 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380596 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.296040897s)
	I0318 12:46:18.380531 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380621 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380625 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380636 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380690 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.229803061s)
	I0318 12:46:18.380715 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380726 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380770 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.61328885s)
	I0318 12:46:18.380790 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.380796 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380814 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380827 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.380847 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.380854 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.380862 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380869 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380928 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.441296322s)
	I0318 12:46:18.380949 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380959 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380976 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.381000 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.381088 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.398845746s)
	W0318 12:46:18.381129 1075954 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0318 12:46:18.381155 1075954 retry.go:31] will retry after 229.259896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
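The error above is an ordering issue rather than a hard failure: the single kubectl apply creates the snapshot.storage.k8s.io CRDs and, in the same invocation, a VolumeSnapshotClass that depends on them, and the API server has not yet established the new CRDs when the custom resource arrives. minikube handles this by retrying the whole apply after a short backoff, as logged below. As a rough sketch (not part of the test run; the 60s timeout is an arbitrary choice), the same race can be avoided by hand by waiting for the CRD to be established before applying resources of that kind:

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

On the retry a few hundred milliseconds later the CRDs are typically established and the same manifests go through.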
	I0318 12:46:18.381228 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.943125188s)
	I0318 12:46:18.381246 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.381256 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.381328 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.381353 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.381360 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.380485 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380239 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.381822 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380529 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.383276 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.383354 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.383380 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.383398 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.383415 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.383889 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.383953 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.383972 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.384190 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384249 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.384267 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.384284 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.384311 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.384602 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384613 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384690 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.384708 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.384717 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384743 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384764 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.384781 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.384784 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.384803 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.384817 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.384828 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380440 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.384941 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.384995 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384696 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385055 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.385104 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385125 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.385145 1075954 addons.go:470] Verifying addon registry=true in "addons-106685"
	I0318 12:46:18.388280 1075954 out.go:177] * Verifying registry addon...
	I0318 12:46:18.384767 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385333 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385378 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.385398 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385405 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.385425 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.385442 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.385475 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385506 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.387742 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.387764 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.387793 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.387913 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.389589 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.390557 1075954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0318 12:46:18.390836 1075954 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-106685 service yakd-dashboard -n yakd-dashboard
	
	I0318 12:46:18.390858 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.390862 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392012 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.392025 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.390866 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.390878 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392116 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.392128 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.390882 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392156 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.392165 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.390897 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392093 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.392251 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.392605 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.392615 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.392623 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.392636 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.392636 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392640 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.392651 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.392651 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392664 1075954 addons.go:470] Verifying addon metrics-server=true in "addons-106685"
	I0318 12:46:18.392676 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.392683 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392692 1075954 addons.go:470] Verifying addon ingress=true in "addons-106685"
	I0318 12:46:18.392699 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.392709 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.394189 1075954 out.go:177] * Verifying ingress addon...
	I0318 12:46:18.396325 1075954 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0318 12:46:18.443375 1075954 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0318 12:46:18.443408 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:18.457772 1075954 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0318 12:46:18.457802 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
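The kapi.go:96 lines that fill most of the rest of this log are a label-selector polling loop reporting each pass until every matched pod leaves Pending. Below is a minimal sketch of that kind of wait, assuming client-go and a kubeconfig at the default location; it is illustrative only, not minikube's actual kapi implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls the pods matching selector in ns until all of them are
    // Running or the timeout expires; each pass corresponds to one kapi.go:96 line.
    func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                running := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        running = false
                        break
                    }
                }
                if running {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // the log shows roughly half-second polling
        }
        return fmt.Errorf("timed out waiting for %q in namespace %q", selector, ns)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("registry pods are Running")
    }
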
	I0318 12:46:18.502150 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.502172 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.502585 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.502635 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.502643 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	W0318 12:46:18.502767 1075954 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
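The warning above is an optimistic-concurrency conflict: the addon tried to update the local-path StorageClass with a stale resourceVersion while another writer changed the object. Retrying the update, or sending the standard default-class annotation as a merge patch (which carries no resourceVersion), avoids that conflict. A hedged client-go sketch, reusing the clientset style from the earlier example:

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // markDefaultStorageClass sets the standard default-class annotation on name.
    // A merge patch carries no resourceVersion, so it cannot hit the
    // "object has been modified" conflict reported in the warning above.
    func markDefaultStorageClass(cs kubernetes.Interface, name string) error {
        patch := []byte(`{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`)
        _, err := cs.StorageV1().StorageClasses().Patch(
            context.TODO(), name, types.MergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            return fmt.Errorf("marking %s as default storage class: %w", name, err)
        }
        return nil
    }
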
	I0318 12:46:18.519285 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.519321 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.519782 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.519819 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.519844 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.610652 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 12:46:18.884782 1075954 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-106685" context rescaled to 1 replicas
	I0318 12:46:18.897776 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:18.901295 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:19.687469 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:19.694178 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:19.747528 1075954 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.425658268s)
	I0318 12:46:19.747580 1075954 api_server.go:72] duration metric: took 12.849419266s to wait for apiserver process to appear ...
	I0318 12:46:19.747588 1075954 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:46:19.747618 1075954 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0318 12:46:19.747616 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.365090365s)
	I0318 12:46:19.747645 1075954 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.261043009s)
	I0318 12:46:19.747669 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:19.747686 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:19.749794 1075954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 12:46:19.748083 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:19.748142 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:19.751312 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:19.752782 1075954 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0318 12:46:19.751343 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:19.754394 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:19.754457 1075954 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0318 12:46:19.754479 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0318 12:46:19.754741 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:19.754780 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:19.754786 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:19.754797 1075954 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-106685"
	I0318 12:46:19.756579 1075954 out.go:177] * Verifying csi-hostpath-driver addon...
	I0318 12:46:19.759287 1075954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0318 12:46:19.819538 1075954 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0318 12:46:19.819572 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:19.823159 1075954 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0318 12:46:19.835733 1075954 api_server.go:141] control plane version: v1.28.4
	I0318 12:46:19.835773 1075954 api_server.go:131] duration metric: took 88.177164ms to wait for apiserver health ...
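The healthz wait above is a plain HTTPS GET against the apiserver endpoint named in the log, and a 200 with body "ok" is what gets recorded. A self-contained sketch of an equivalent probe; the address comes from the log line, and certificate verification is skipped here only because the apiserver serves a certificate issued for the VM IP rather than a public CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // illustrative only: skip verification of the VM's self-issued apiserver cert
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.205:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok, matching the log
    }
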
	I0318 12:46:19.835782 1075954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:46:19.878774 1075954 system_pods.go:59] 19 kube-system pods found
	I0318 12:46:19.878815 1075954 system_pods.go:61] "coredns-5dd5756b68-fgjhz" [d2fa8bcb-a39b-4837-b965-4cbf558cf890] Running
	I0318 12:46:19.878822 1075954 system_pods.go:61] "coredns-5dd5756b68-qf446" [79feb7b9-b1c9-42a6-adbb-324e45aa35ec] Running
	I0318 12:46:19.878831 1075954 system_pods.go:61] "csi-hostpath-attacher-0" [8500ab8e-1f4b-4d6c-8ea7-183a45765ccd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0318 12:46:19.878837 1075954 system_pods.go:61] "csi-hostpath-resizer-0" [0a1779b3-86ae-429b-9ea1-11ea3b7dd11f] Pending
	I0318 12:46:19.878846 1075954 system_pods.go:61] "csi-hostpathplugin-tdddd" [683115f5-0641-4123-81af-970fe5185bbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 12:46:19.878852 1075954 system_pods.go:61] "etcd-addons-106685" [eae56be1-5d4d-470f-b176-c6382f70f80d] Running
	I0318 12:46:19.878858 1075954 system_pods.go:61] "kube-apiserver-addons-106685" [3f02b47b-2644-4acd-a455-71779192f951] Running
	I0318 12:46:19.878862 1075954 system_pods.go:61] "kube-controller-manager-addons-106685" [8ea59361-4e78-4978-b47a-cf380d4098c7] Running
	I0318 12:46:19.878870 1075954 system_pods.go:61] "kube-ingress-dns-minikube" [b2c4ec5a-1796-470c-b324-7c018ab2799d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0318 12:46:19.878875 1075954 system_pods.go:61] "kube-proxy-ll74j" [5d5816ef-f9cb-492d-a933-16308c544452] Running
	I0318 12:46:19.878882 1075954 system_pods.go:61] "kube-scheduler-addons-106685" [996af90e-7a6e-4814-ba1a-55cabcc82da0] Running
	I0318 12:46:19.878891 1075954 system_pods.go:61] "metrics-server-69cf46c98-b9sd4" [ef2ad747-2bac-41dc-9aa5-96fa6e675413] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 12:46:19.878903 1075954 system_pods.go:61] "nvidia-device-plugin-daemonset-rgg96" [375e6fa2-ca11-40df-b093-1c93e6401092] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0318 12:46:19.878913 1075954 system_pods.go:61] "registry-proxy-j97lj" [8ea57f10-a30d-4291-9636-1e99d163e226] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0318 12:46:19.878923 1075954 system_pods.go:61] "registry-vw2h8" [de58d932-6f78-479f-9d49-55619fa3881a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0318 12:46:19.878933 1075954 system_pods.go:61] "snapshot-controller-58dbcc7b99-2gcn9" [1095c47c-fd36-43a0-94f7-f1aae5fe1090] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:46:19.878949 1075954 system_pods.go:61] "snapshot-controller-58dbcc7b99-5vtqp" [eaf7d472-3dbe-449f-b580-e851b86a5850] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:46:19.878955 1075954 system_pods.go:61] "storage-provisioner" [08aa63b8-ea35-4443-b7d6-fd52b4de2b95] Running
	I0318 12:46:19.878964 1075954 system_pods.go:61] "tiller-deploy-7b677967b9-599zv" [bf1c4d73-4b36-4d8d-a497-58eeab0d4f6d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0318 12:46:19.878974 1075954 system_pods.go:74] duration metric: took 43.18332ms to wait for pod list to return data ...
	I0318 12:46:19.878987 1075954 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:46:19.913404 1075954 default_sa.go:45] found service account: "default"
	I0318 12:46:19.913445 1075954 default_sa.go:55] duration metric: took 34.442177ms for default service account to be created ...
	I0318 12:46:19.913459 1075954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:46:19.934364 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:19.938716 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:19.959882 1075954 system_pods.go:86] 19 kube-system pods found
	I0318 12:46:19.959920 1075954 system_pods.go:89] "coredns-5dd5756b68-fgjhz" [d2fa8bcb-a39b-4837-b965-4cbf558cf890] Running
	I0318 12:46:19.959926 1075954 system_pods.go:89] "coredns-5dd5756b68-qf446" [79feb7b9-b1c9-42a6-adbb-324e45aa35ec] Running
	I0318 12:46:19.959934 1075954 system_pods.go:89] "csi-hostpath-attacher-0" [8500ab8e-1f4b-4d6c-8ea7-183a45765ccd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0318 12:46:19.959943 1075954 system_pods.go:89] "csi-hostpath-resizer-0" [0a1779b3-86ae-429b-9ea1-11ea3b7dd11f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0318 12:46:19.959952 1075954 system_pods.go:89] "csi-hostpathplugin-tdddd" [683115f5-0641-4123-81af-970fe5185bbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 12:46:19.959958 1075954 system_pods.go:89] "etcd-addons-106685" [eae56be1-5d4d-470f-b176-c6382f70f80d] Running
	I0318 12:46:19.959963 1075954 system_pods.go:89] "kube-apiserver-addons-106685" [3f02b47b-2644-4acd-a455-71779192f951] Running
	I0318 12:46:19.959968 1075954 system_pods.go:89] "kube-controller-manager-addons-106685" [8ea59361-4e78-4978-b47a-cf380d4098c7] Running
	I0318 12:46:19.959974 1075954 system_pods.go:89] "kube-ingress-dns-minikube" [b2c4ec5a-1796-470c-b324-7c018ab2799d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0318 12:46:19.959979 1075954 system_pods.go:89] "kube-proxy-ll74j" [5d5816ef-f9cb-492d-a933-16308c544452] Running
	I0318 12:46:19.959984 1075954 system_pods.go:89] "kube-scheduler-addons-106685" [996af90e-7a6e-4814-ba1a-55cabcc82da0] Running
	I0318 12:46:19.959993 1075954 system_pods.go:89] "metrics-server-69cf46c98-b9sd4" [ef2ad747-2bac-41dc-9aa5-96fa6e675413] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 12:46:19.960000 1075954 system_pods.go:89] "nvidia-device-plugin-daemonset-rgg96" [375e6fa2-ca11-40df-b093-1c93e6401092] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0318 12:46:19.960008 1075954 system_pods.go:89] "registry-proxy-j97lj" [8ea57f10-a30d-4291-9636-1e99d163e226] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0318 12:46:19.960015 1075954 system_pods.go:89] "registry-vw2h8" [de58d932-6f78-479f-9d49-55619fa3881a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0318 12:46:19.960024 1075954 system_pods.go:89] "snapshot-controller-58dbcc7b99-2gcn9" [1095c47c-fd36-43a0-94f7-f1aae5fe1090] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:46:19.960034 1075954 system_pods.go:89] "snapshot-controller-58dbcc7b99-5vtqp" [eaf7d472-3dbe-449f-b580-e851b86a5850] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:46:19.960038 1075954 system_pods.go:89] "storage-provisioner" [08aa63b8-ea35-4443-b7d6-fd52b4de2b95] Running
	I0318 12:46:19.960045 1075954 system_pods.go:89] "tiller-deploy-7b677967b9-599zv" [bf1c4d73-4b36-4d8d-a497-58eeab0d4f6d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0318 12:46:19.960053 1075954 system_pods.go:126] duration metric: took 46.586843ms to wait for k8s-apps to be running ...
	I0318 12:46:19.960063 1075954 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:46:19.960117 1075954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
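The kubelet wait is driven by the systemctl command in the line above: is-active --quiet exits 0 when the unit is running, which is all the check needs to know. A small local equivalent in Go (illustrative; the real check runs over SSH inside the VM):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 when the unit is active,
        // non-zero otherwise.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
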
	I0318 12:46:19.961095 1075954 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0318 12:46:19.961119 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0318 12:46:20.110368 1075954 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 12:46:20.110404 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0318 12:46:20.280613 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:20.300520 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 12:46:20.408356 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:20.408411 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:20.769296 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:20.896953 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:20.901108 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:21.272778 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:21.396453 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:21.402515 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:21.657571 1075954 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.697419004s)
	I0318 12:46:21.657630 1075954 system_svc.go:56] duration metric: took 1.697561258s WaitForService to wait for kubelet
	I0318 12:46:21.657644 1075954 kubeadm.go:576] duration metric: took 14.759483192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:46:21.657684 1075954 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:46:21.657571 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.046856093s)
	I0318 12:46:21.657751 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:21.657771 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:21.658211 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:21.658231 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:21.658242 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:21.658251 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:21.658554 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:21.658586 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:21.661523 1075954 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:46:21.661547 1075954 node_conditions.go:123] node cpu capacity is 2
	I0318 12:46:21.661557 1075954 node_conditions.go:105] duration metric: took 3.866392ms to run NodePressure ...
	I0318 12:46:21.661570 1075954 start.go:240] waiting for startup goroutines ...
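The NodePressure verification reads each node's capacity and pressure conditions; the figures logged above (2 CPUs, 17734596Ki ephemeral storage) come straight from the node status. A sketch of that read, assuming the same client-go imports and clientset as the earlier example:

    // listNodeStatus prints capacity and pressure conditions for every node,
    // the same fields the node_conditions lines above summarise.
    func listNodeStatus(cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
        return nil
    }
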
	I0318 12:46:21.765868 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:21.923647 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:21.924298 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:22.223330 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.922757305s)
	I0318 12:46:22.223407 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:22.223426 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:22.223767 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:22.223791 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:22.223803 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:22.223816 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:22.224358 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:22.224429 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:22.224456 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:22.226126 1075954 addons.go:470] Verifying addon gcp-auth=true in "addons-106685"
	I0318 12:46:22.227789 1075954 out.go:177] * Verifying gcp-auth addon...
	I0318 12:46:22.229680 1075954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0318 12:46:22.251029 1075954 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0318 12:46:22.251056 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:22.291112 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:22.397387 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:22.402173 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:22.733675 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:22.766918 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:22.897535 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:22.903300 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:23.234603 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:23.265740 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:23.396388 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:23.399920 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:23.734510 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:23.766354 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:23.897511 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:23.901469 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:24.233783 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:24.266455 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:24.400102 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:24.406598 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:24.734194 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:24.765642 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:25.149129 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:25.152770 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:25.234683 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:25.265489 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:25.406624 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:25.410916 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:25.734255 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:25.768671 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:25.896766 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:25.900830 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:26.234037 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:26.266985 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:26.396586 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:26.400823 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:26.738259 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:26.766162 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:26.896433 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:26.900674 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:27.235538 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:27.265696 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:27.395913 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:27.400548 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:27.734360 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:27.765888 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:27.898798 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:27.900728 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:28.237197 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:28.265749 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:28.397929 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:28.400947 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:28.734621 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:28.765218 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:28.895970 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:28.900946 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:29.234437 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:29.266441 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:29.396428 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:29.400645 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:29.733980 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:29.767713 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:29.896255 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:29.900269 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:30.233951 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:30.265110 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:30.398717 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:30.401856 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:30.735097 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:30.765672 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:30.895944 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:30.900482 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:31.234506 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:31.266477 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:31.396764 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:31.400493 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:31.733833 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:31.765340 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:31.896259 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:31.900197 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:32.233893 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:32.267250 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:32.396277 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:32.400542 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:32.734510 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:32.765912 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:32.896061 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:32.899988 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:33.434272 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:33.437913 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:33.440588 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:33.442078 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:33.733252 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:33.766013 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:33.895936 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:33.899901 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:34.234441 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:34.266620 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:34.396926 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:34.400709 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:34.733748 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:34.769393 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:34.899723 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:34.906693 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:35.234351 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:35.266659 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:35.396166 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:35.400730 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:35.734326 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:35.766432 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:35.896866 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:35.902674 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:36.235914 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:36.265887 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:36.406281 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:36.406336 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:36.735090 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:36.767306 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:36.896404 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:36.901347 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:37.234345 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:37.267207 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:37.396601 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:37.400698 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:37.734272 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:37.766347 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:37.905377 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:37.915429 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:38.234006 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:38.265262 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:38.396609 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:38.401505 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:38.733806 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:38.765335 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:38.896351 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:38.900423 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:39.234569 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:39.266592 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:39.395822 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:39.400084 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:39.733806 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:39.765849 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:39.895908 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:39.900315 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:40.234096 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:40.267143 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:40.399422 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:40.403279 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:40.733430 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:40.768449 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:40.896919 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:40.900159 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:41.234405 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:41.268875 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:41.396581 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:41.400324 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:41.735608 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:41.768957 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:41.896730 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:41.902078 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:42.235048 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:42.265938 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:42.397297 1075954 kapi.go:107] duration metric: took 24.006738369s to wait for kubernetes.io/minikube-addons=registry ...
	I0318 12:46:42.401026 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:42.735385 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:42.772691 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:42.901268 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:43.234535 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:43.266711 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:43.401386 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:43.734103 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:43.767479 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:43.901912 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:44.234383 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:44.271009 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:44.401600 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:44.734186 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:44.766086 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:44.901535 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:45.234993 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:45.264979 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:45.401718 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:45.734382 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:45.766649 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:45.902136 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:46.234292 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:46.265936 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:46.402017 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:46.734843 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:46.766327 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:46.901278 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:47.234780 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:47.265742 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:47.401805 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:47.734064 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:47.765851 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:47.901844 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:48.235009 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:48.269216 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:48.401460 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:48.734563 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:48.768515 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:48.901178 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:49.235041 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:49.273374 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:49.402102 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:49.807307 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:49.809282 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:49.902669 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:50.234466 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:50.266941 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:50.401774 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:50.734374 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:50.766305 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:50.902393 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:51.234752 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:51.266106 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:51.400932 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:51.734756 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:51.765617 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:51.901116 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:52.234557 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:52.266799 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:52.404203 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:52.734284 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:52.768764 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:52.901685 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:53.233866 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:53.265802 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:53.401977 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:53.734584 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:53.766100 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:53.901876 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:54.234758 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:54.265214 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:54.403662 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:54.735154 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:54.765961 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:54.906905 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:55.234653 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:55.266082 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:55.401730 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:55.737005 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:56.050924 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:56.055919 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:56.235286 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:56.266413 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:56.404431 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:56.733888 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:56.766837 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:56.902085 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:57.234694 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:57.268277 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:57.402084 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:57.734915 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:57.766708 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:57.903553 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:58.236691 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:58.264921 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:58.402421 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:58.736987 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:58.806466 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:58.906543 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:59.239025 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:59.265666 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:59.401696 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:59.734354 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:59.766190 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:59.905170 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:00.234149 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:00.265755 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:00.401795 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:00.733867 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:00.766396 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:00.900975 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:01.234892 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:01.265415 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:01.401297 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:01.733618 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:01.766363 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:01.901204 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:02.576369 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:02.577091 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:02.577512 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:02.734599 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:02.766568 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:02.901423 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:03.235787 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:03.265296 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:03.402031 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:03.736456 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:03.766802 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:03.901985 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:04.234786 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:04.273034 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:04.402530 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:04.734242 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:04.769605 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:04.902175 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:05.234217 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:05.266002 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:05.402103 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:05.735174 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:05.767126 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:05.903135 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:06.235521 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:06.265978 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:06.402417 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:06.739441 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:06.766139 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:06.901015 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:07.233769 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:07.266710 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:07.401582 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:07.734219 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:07.765472 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:07.901622 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:08.234023 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:08.266081 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:08.401398 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:08.733600 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:08.765001 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:08.901866 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:09.236997 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:09.266119 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:09.404625 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:09.733890 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:09.765206 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:09.901487 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:10.242534 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:10.267010 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:10.402106 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:10.733984 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:10.771589 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:10.904171 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:11.235645 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:11.267365 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:11.404794 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:11.736477 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:11.765394 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:11.901141 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:12.234822 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:12.265926 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:12.402358 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:12.733722 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:12.765246 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:12.900916 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:13.235312 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:13.266224 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:13.404218 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:13.760490 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:13.771444 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:13.901956 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:14.234358 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:14.266946 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:14.402000 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:14.733864 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:14.765456 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:14.902644 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:15.233797 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:15.265220 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:15.404240 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:15.734353 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:15.766090 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:15.902343 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:16.234450 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:16.265797 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:16.401966 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:16.736108 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:16.768701 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:17.304540 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:17.305361 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:17.309185 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:17.402157 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:17.735010 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:17.767041 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:17.903823 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:18.235744 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:18.266072 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:18.401693 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:18.739766 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:18.766351 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:18.905363 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:19.235384 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:19.266127 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:19.401465 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:19.734055 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:19.773981 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:19.901197 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:20.233723 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:20.266135 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:20.401823 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:20.735447 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:20.766501 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:20.905295 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:21.234542 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:21.273457 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:21.402519 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:21.737296 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:21.770563 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:21.903808 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:22.254718 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:22.305031 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:22.413284 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:22.736981 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:22.765244 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:22.903431 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:23.233926 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:23.266132 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:23.401704 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:23.734491 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:23.767077 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:23.901335 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:24.234767 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:24.269264 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:24.402275 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:24.733694 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:24.765057 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:24.903525 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:25.313372 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:25.314424 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:25.532691 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:25.781388 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:25.785161 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:25.902023 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:26.236186 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:26.267190 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:26.402625 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:26.742011 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:26.767792 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:26.902496 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:27.234018 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:27.265754 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:27.401966 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:27.734496 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:27.765844 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:27.901785 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:28.233717 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:28.265614 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:28.401922 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:28.734604 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:28.766270 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:28.901251 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:29.233662 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:29.264770 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:29.402288 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:30.005439 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:30.005686 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:30.009680 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:30.233774 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:30.265690 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:30.400955 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:30.734488 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:30.765301 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:30.901141 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:31.233958 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:31.265383 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:31.401732 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:31.744766 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:31.784385 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:31.918035 1075954 kapi.go:107] duration metric: took 1m13.521704474s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0318 12:47:32.236327 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:32.266762 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:32.737759 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:32.768145 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:33.256684 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:33.270676 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:33.735428 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:33.765916 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:34.234171 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:34.266607 1075954 kapi.go:107] duration metric: took 1m14.507317762s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0318 12:47:34.733441 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:35.233892 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:35.735098 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:36.233814 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:36.735323 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:37.236804 1075954 kapi.go:107] duration metric: took 1m15.007115484s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0318 12:47:37.239112 1075954 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-106685 cluster.
	I0318 12:47:37.240789 1075954 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0318 12:47:37.242383 1075954 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0318 12:47:37.243991 1075954 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, yakd, inspektor-gadget, nvidia-device-plugin, metrics-server, helm-tiller, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0318 12:47:37.245489 1075954 addons.go:505] duration metric: took 1m30.347320584s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns yakd inspektor-gadget nvidia-device-plugin metrics-server helm-tiller default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0318 12:47:37.245540 1075954 start.go:245] waiting for cluster config update ...
	I0318 12:47:37.245574 1075954 start.go:254] writing updated cluster config ...
	I0318 12:47:37.245895 1075954 ssh_runner.go:195] Run: rm -f paused
	I0318 12:47:37.304028 1075954 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 12:47:37.306234 1075954 out.go:177] * Done! kubectl is now configured to use "addons-106685" cluster and "default" namespace by default
	
	
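The gcp-auth notes in the output above can be acted on directly. Below is a minimal sketch, not taken from this test run, of (a) a pod manifest that opts out of credential mounting via the gcp-auth-skip-secret label, and (b) rerunning the addon with --refresh so already-running pods pick up credentials. The pod name "demo-pod", the label value "true", and the image tag are illustrative assumptions; the profile name and binary path match the ones used elsewhere in this report.

	# (a) hypothetical pod that should NOT have GCP credentials mounted
	apiVersion: v1
	kind: Pod
	metadata:
	  name: demo-pod                      # illustrative name, not from the test run
	  labels:
	    gcp-auth-skip-secret: "true"      # label key checked by the gcp-auth webhook; value assumed
	spec:
	  containers:
	  - name: app
	    image: gcr.io/google-samples/hello-app:1.0   # sample image; tag assumed

	# (b) refresh credentials for pods created before gcp-auth was enabled
	out/minikube-linux-amd64 -p addons-106685 addons enable gcp-auth --refresh
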
	==> CRI-O <==
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.253535743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766235253452085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf7a55f7-d548-40fa-9cc3-ace858ce19d1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.254427855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb240d57-b4d0-4f30-9835-2685bba44d44 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.254594466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb240d57-b4d0-4f30-9835-2685bba44d44 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.255567086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48077259d58314c16dd37135ed9fed58fc23bd3f6cc5356995a83566796f15c0,PodSandboxId:33923d581c80ed32c00b0d814a480f23ed9979ed4fbaa332cd308787f3e2ef85,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710766226839136999,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-8kk78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e70aca7-4332-4632-a922-f5b6658bde40,},Annotations:map[string]string{io.kubernetes.container.hash: ea6de7b7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ce88d77959fc5dcb604456c4dcb5d83c5a65f32aa4148ad5667bb47b7f6d5e,PodSandboxId:9f05efce9813faf5e66cefc62697b23fd6de0ef2ffef9dbcccb6f88070092861,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710766096835807858,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-pgf48,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 12add79b-561a-4482-8417-e1c41272f80c,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9977188f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a65b94e17d705079587cf8a716d73a6a313abeb8c5fb61a67baf652a709d94a,PodSandboxId:e95447205e0c5b5241301be3a71358989e24e468a6351e6caaa6de84b3293089,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710766085848996884,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 86b2012e-e452-410b-808c-3fc378157346,},Annotations:map[string]string{io.kubernetes.container.hash: a24f2297,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263cc073a7ada4ace2e1a4941894c21a8826bdfc4ec4cb2b78e6904210bd9384,PodSandboxId:1f955c546533c04351a221119cc0e5ea964cfaedd954af94a11c43d18a181e9f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710766056216935340,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-v52wf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 979481e8-a8f3-42f0-a864-0cbbb970295f,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfc0e54,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e07706b051d68b0fbc95c9d5af012578d7ceeb37ea1417adba7c5c2fc54ad,PodSandboxId:5088889c909b8a6e462c7c28a782bc5c9b7404d890babaf2ecc22a6f09fdd344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710766035490354128,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q9qrg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17f868c2-2f63-421a-b995-ad4d2af21136,},Annotations:map[string]string{io.kubernetes.container.hash: cbb1eb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:091499b134c6cca7e442deda95e77c651d428d7cebefb2951174d83f99319c75,PodSandboxId:e17d32473bc90069b121cc1ec9304089a5f268b92512aca2b055f3652bf9346f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a
1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710766032284347789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-z2nvx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55455782-5bbd-45e1-9f4a-faf2d6cbbe54,},Annotations:map[string]string{io.kubernetes.container.hash: b62bde0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aeaee22ccbbc9036bd66f7ccf420057425b0c84e8b1e9f61fe618548dd4c6cd,PodSandboxId:adea854b67ff1488914764d88430f72ce43786ce2217dfa36bae9e79d2693b49,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710766029888150188,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-2l56b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 60ebbf3d-9ad7-46d5-8322-97199a8c455a,},Annotations:map[string]string{io.kubernetes.container.hash: 4068f4ed,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427429c96eb55de64e97be2f2d5e55f3a89c66b7d96d5880c4243b484cf7203e,PodSandboxId:f21414a2cb91f80250dc81de6d5fd1773f36a6e812acbf8c69132032e7a004ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337
bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710766017047320108,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-b9sd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2ad747-2bac-41dc-9aa5-96fa6e675413,},Annotations:map[string]string{io.kubernetes.container.hash: 75ec23c9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5f4c0e544a3c3a427017fcac95f8cd8d8d0b499a6ed1b8f962e7e2d1f69b4,PodSandboxId:951e107352e5f1c96fa58a30ee65726c2e81513380dfc4e2f60b192c3d25ef1a,Metadata:&ContainerMetadata{Name:local-pa
th-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710766007524018229,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-q66bb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d517aa47-1e1c-40b4-804f-ee78b8b68ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 9f604234,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02,PodSandboxId:32799164389bb1e21748bba2a22ef377aaae4
0aa89160cd5635e0841cba83c4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765977459291432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aa63b8-ea35-4443-b7d6-fd52b4de2b95,},Annotations:map[string]string{io.kubernetes.container.hash: 411ddc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553,PodSandboxId:0c5d389f84cb0b3e61a3dab70eb10aee0f7330d5cee213613
270d5b0fb05bf18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765969846966224,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qf446,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79feb7b9-b1c9-42a6-adbb-324e45aa35ec,},Annotations:map[string]string{io.kubernetes.container.hash: f73c9935,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10,PodSandboxId:1ccbe7d02bdedb50f5466efb31819a04de97003567b853f945ffe793eca754e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765968347772894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll74j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5816ef-f9cb-492d-a933-16308c544452,},Annotations:map[string]string{io.kubernetes.container.hash: c5db1137,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d,PodSandboxId:bb3ab6d94fd816fa39290ad51524e61e4c6c8fb30da6a1cb7bc32ceb0ebd635d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765948834867355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c711354b860ec72c4c9c1801ca1276b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21,PodSandboxId:3284869491f7b3045f8d4d22116e49ab5bffc48b402384b6435a4e3a7631ccc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765948842126583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1608f73716cdad8193b68b48974d752,},Annotations:map[string]string{io.kubernetes.container.hash: 618d26b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea1767b3abd6811be9eb3b724908
222760cb3c4f268af71b6f4d8c4c42016c2,PodSandboxId:31ccb175c26c39dd34ed8c67b99840e08f4f95abb27b0b4eb0715bd6a5664f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765948777812010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3ce20a5ebdba55d914571b099f373a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cefbcb5554340c73779
a2ac53c1d113bc4571a534c696beba62da4096b8a0837,PodSandboxId:9103b05230128df989c7dc335121da75ac34af7c0de65475a91fe3ab71660daa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765948776039378,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afb7367f1b05b96362989188d3d982e,},Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f
b240d57-b4d0-4f30-9835-2685bba44d44 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.306361136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0257f854-1052-4dbb-b2a7-9d72eb7e6243 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.306447524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0257f854-1052-4dbb-b2a7-9d72eb7e6243 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.307883217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55780157-7fa4-4beb-a554-4a165be136c4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.309348396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766235309319040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55780157-7fa4-4beb-a554-4a165be136c4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.310103222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d65e55d8-7b2e-43e4-988a-c96c47b22389 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.310158710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d65e55d8-7b2e-43e4-988a-c96c47b22389 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.310648740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48077259d58314c16dd37135ed9fed58fc23bd3f6cc5356995a83566796f15c0,PodSandboxId:33923d581c80ed32c00b0d814a480f23ed9979ed4fbaa332cd308787f3e2ef85,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710766226839136999,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-8kk78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e70aca7-4332-4632-a922-f5b6658bde40,},Annotations:map[string]string{io.kubernetes.container.hash: ea6de7b7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ce88d77959fc5dcb604456c4dcb5d83c5a65f32aa4148ad5667bb47b7f6d5e,PodSandboxId:9f05efce9813faf5e66cefc62697b23fd6de0ef2ffef9dbcccb6f88070092861,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710766096835807858,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-pgf48,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 12add79b-561a-4482-8417-e1c41272f80c,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9977188f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a65b94e17d705079587cf8a716d73a6a313abeb8c5fb61a67baf652a709d94a,PodSandboxId:e95447205e0c5b5241301be3a71358989e24e468a6351e6caaa6de84b3293089,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710766085848996884,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 86b2012e-e452-410b-808c-3fc378157346,},Annotations:map[string]string{io.kubernetes.container.hash: a24f2297,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263cc073a7ada4ace2e1a4941894c21a8826bdfc4ec4cb2b78e6904210bd9384,PodSandboxId:1f955c546533c04351a221119cc0e5ea964cfaedd954af94a11c43d18a181e9f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710766056216935340,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-v52wf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 979481e8-a8f3-42f0-a864-0cbbb970295f,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfc0e54,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e07706b051d68b0fbc95c9d5af012578d7ceeb37ea1417adba7c5c2fc54ad,PodSandboxId:5088889c909b8a6e462c7c28a782bc5c9b7404d890babaf2ecc22a6f09fdd344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710766035490354128,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q9qrg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17f868c2-2f63-421a-b995-ad4d2af21136,},Annotations:map[string]string{io.kubernetes.container.hash: cbb1eb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:091499b134c6cca7e442deda95e77c651d428d7cebefb2951174d83f99319c75,PodSandboxId:e17d32473bc90069b121cc1ec9304089a5f268b92512aca2b055f3652bf9346f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a
1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710766032284347789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-z2nvx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55455782-5bbd-45e1-9f4a-faf2d6cbbe54,},Annotations:map[string]string{io.kubernetes.container.hash: b62bde0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aeaee22ccbbc9036bd66f7ccf420057425b0c84e8b1e9f61fe618548dd4c6cd,PodSandboxId:adea854b67ff1488914764d88430f72ce43786ce2217dfa36bae9e79d2693b49,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710766029888150188,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-2l56b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 60ebbf3d-9ad7-46d5-8322-97199a8c455a,},Annotations:map[string]string{io.kubernetes.container.hash: 4068f4ed,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427429c96eb55de64e97be2f2d5e55f3a89c66b7d96d5880c4243b484cf7203e,PodSandboxId:f21414a2cb91f80250dc81de6d5fd1773f36a6e812acbf8c69132032e7a004ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337
bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710766017047320108,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-b9sd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2ad747-2bac-41dc-9aa5-96fa6e675413,},Annotations:map[string]string{io.kubernetes.container.hash: 75ec23c9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5f4c0e544a3c3a427017fcac95f8cd8d8d0b499a6ed1b8f962e7e2d1f69b4,PodSandboxId:951e107352e5f1c96fa58a30ee65726c2e81513380dfc4e2f60b192c3d25ef1a,Metadata:&ContainerMetadata{Name:local-pa
th-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710766007524018229,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-q66bb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d517aa47-1e1c-40b4-804f-ee78b8b68ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 9f604234,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02,PodSandboxId:32799164389bb1e21748bba2a22ef377aaae4
0aa89160cd5635e0841cba83c4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765977459291432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aa63b8-ea35-4443-b7d6-fd52b4de2b95,},Annotations:map[string]string{io.kubernetes.container.hash: 411ddc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553,PodSandboxId:0c5d389f84cb0b3e61a3dab70eb10aee0f7330d5cee213613
270d5b0fb05bf18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765969846966224,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qf446,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79feb7b9-b1c9-42a6-adbb-324e45aa35ec,},Annotations:map[string]string{io.kubernetes.container.hash: f73c9935,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10,PodSandboxId:1ccbe7d02bdedb50f5466efb31819a04de97003567b853f945ffe793eca754e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765968347772894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll74j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5816ef-f9cb-492d-a933-16308c544452,},Annotations:map[string]string{io.kubernetes.container.hash: c5db1137,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d,PodSandboxId:bb3ab6d94fd816fa39290ad51524e61e4c6c8fb30da6a1cb7bc32ceb0ebd635d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765948834867355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c711354b860ec72c4c9c1801ca1276b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21,PodSandboxId:3284869491f7b3045f8d4d22116e49ab5bffc48b402384b6435a4e3a7631ccc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765948842126583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1608f73716cdad8193b68b48974d752,},Annotations:map[string]string{io.kubernetes.container.hash: 618d26b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea1767b3abd6811be9eb3b724908
222760cb3c4f268af71b6f4d8c4c42016c2,PodSandboxId:31ccb175c26c39dd34ed8c67b99840e08f4f95abb27b0b4eb0715bd6a5664f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765948777812010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3ce20a5ebdba55d914571b099f373a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cefbcb5554340c73779
a2ac53c1d113bc4571a534c696beba62da4096b8a0837,PodSandboxId:9103b05230128df989c7dc335121da75ac34af7c0de65475a91fe3ab71660daa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765948776039378,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afb7367f1b05b96362989188d3d982e,},Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d
65e55d8-7b2e-43e4-988a-c96c47b22389 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.346452092Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbd18694-46e1-4946-8022-95579e48d03c name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.346632775Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbd18694-46e1-4946-8022-95579e48d03c name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.348023693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f38e9e8-c377-41d9-ad4c-e60df489377b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.349290980Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766235349243422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f38e9e8-c377-41d9-ad4c-e60df489377b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.350089511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db119b14-81ad-4c98-bebb-8e72093074e0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.350355382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db119b14-81ad-4c98-bebb-8e72093074e0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.351326978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48077259d58314c16dd37135ed9fed58fc23bd3f6cc5356995a83566796f15c0,PodSandboxId:33923d581c80ed32c00b0d814a480f23ed9979ed4fbaa332cd308787f3e2ef85,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710766226839136999,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-8kk78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e70aca7-4332-4632-a922-f5b6658bde40,},Annotations:map[string]string{io.kubernetes.container.hash: ea6de7b7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ce88d77959fc5dcb604456c4dcb5d83c5a65f32aa4148ad5667bb47b7f6d5e,PodSandboxId:9f05efce9813faf5e66cefc62697b23fd6de0ef2ffef9dbcccb6f88070092861,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710766096835807858,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-pgf48,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 12add79b-561a-4482-8417-e1c41272f80c,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9977188f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a65b94e17d705079587cf8a716d73a6a313abeb8c5fb61a67baf652a709d94a,PodSandboxId:e95447205e0c5b5241301be3a71358989e24e468a6351e6caaa6de84b3293089,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710766085848996884,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 86b2012e-e452-410b-808c-3fc378157346,},Annotations:map[string]string{io.kubernetes.container.hash: a24f2297,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263cc073a7ada4ace2e1a4941894c21a8826bdfc4ec4cb2b78e6904210bd9384,PodSandboxId:1f955c546533c04351a221119cc0e5ea964cfaedd954af94a11c43d18a181e9f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710766056216935340,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-v52wf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 979481e8-a8f3-42f0-a864-0cbbb970295f,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfc0e54,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e07706b051d68b0fbc95c9d5af012578d7ceeb37ea1417adba7c5c2fc54ad,PodSandboxId:5088889c909b8a6e462c7c28a782bc5c9b7404d890babaf2ecc22a6f09fdd344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710766035490354128,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q9qrg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17f868c2-2f63-421a-b995-ad4d2af21136,},Annotations:map[string]string{io.kubernetes.container.hash: cbb1eb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:091499b134c6cca7e442deda95e77c651d428d7cebefb2951174d83f99319c75,PodSandboxId:e17d32473bc90069b121cc1ec9304089a5f268b92512aca2b055f3652bf9346f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a
1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710766032284347789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-z2nvx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55455782-5bbd-45e1-9f4a-faf2d6cbbe54,},Annotations:map[string]string{io.kubernetes.container.hash: b62bde0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aeaee22ccbbc9036bd66f7ccf420057425b0c84e8b1e9f61fe618548dd4c6cd,PodSandboxId:adea854b67ff1488914764d88430f72ce43786ce2217dfa36bae9e79d2693b49,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710766029888150188,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-2l56b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 60ebbf3d-9ad7-46d5-8322-97199a8c455a,},Annotations:map[string]string{io.kubernetes.container.hash: 4068f4ed,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427429c96eb55de64e97be2f2d5e55f3a89c66b7d96d5880c4243b484cf7203e,PodSandboxId:f21414a2cb91f80250dc81de6d5fd1773f36a6e812acbf8c69132032e7a004ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337
bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710766017047320108,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-b9sd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2ad747-2bac-41dc-9aa5-96fa6e675413,},Annotations:map[string]string{io.kubernetes.container.hash: 75ec23c9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5f4c0e544a3c3a427017fcac95f8cd8d8d0b499a6ed1b8f962e7e2d1f69b4,PodSandboxId:951e107352e5f1c96fa58a30ee65726c2e81513380dfc4e2f60b192c3d25ef1a,Metadata:&ContainerMetadata{Name:local-pa
th-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710766007524018229,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-q66bb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d517aa47-1e1c-40b4-804f-ee78b8b68ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 9f604234,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02,PodSandboxId:32799164389bb1e21748bba2a22ef377aaae4
0aa89160cd5635e0841cba83c4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765977459291432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aa63b8-ea35-4443-b7d6-fd52b4de2b95,},Annotations:map[string]string{io.kubernetes.container.hash: 411ddc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553,PodSandboxId:0c5d389f84cb0b3e61a3dab70eb10aee0f7330d5cee213613
270d5b0fb05bf18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765969846966224,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qf446,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79feb7b9-b1c9-42a6-adbb-324e45aa35ec,},Annotations:map[string]string{io.kubernetes.container.hash: f73c9935,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10,PodSandboxId:1ccbe7d02bdedb50f5466efb31819a04de97003567b853f945ffe793eca754e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765968347772894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll74j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5816ef-f9cb-492d-a933-16308c544452,},Annotations:map[string]string{io.kubernetes.container.hash: c5db1137,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d,PodSandboxId:bb3ab6d94fd816fa39290ad51524e61e4c6c8fb30da6a1cb7bc32ceb0ebd635d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765948834867355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c711354b860ec72c4c9c1801ca1276b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21,PodSandboxId:3284869491f7b3045f8d4d22116e49ab5bffc48b402384b6435a4e3a7631ccc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765948842126583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1608f73716cdad8193b68b48974d752,},Annotations:map[string]string{io.kubernetes.container.hash: 618d26b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea1767b3abd6811be9eb3b724908
222760cb3c4f268af71b6f4d8c4c42016c2,PodSandboxId:31ccb175c26c39dd34ed8c67b99840e08f4f95abb27b0b4eb0715bd6a5664f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765948777812010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3ce20a5ebdba55d914571b099f373a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cefbcb5554340c73779
a2ac53c1d113bc4571a534c696beba62da4096b8a0837,PodSandboxId:9103b05230128df989c7dc335121da75ac34af7c0de65475a91fe3ab71660daa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765948776039378,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afb7367f1b05b96362989188d3d982e,},Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d
b119b14-81ad-4c98-bebb-8e72093074e0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.399378922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffae31d9-e10f-41f8-a3d7-7e03f9e14770 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.399456577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffae31d9-e10f-41f8-a3d7-7e03f9e14770 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.401042668Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=240c0ff6-efbe-4b06-95ad-a3b087aaac37 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.402227643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766235402199288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=240c0ff6-efbe-4b06-95ad-a3b087aaac37 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.403327663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00ec5616-9a09-430e-8dfa-f269bfd23cd7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.403384315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00ec5616-9a09-430e-8dfa-f269bfd23cd7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:35 addons-106685 crio[673]: time="2024-03-18 12:50:35.403976328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48077259d58314c16dd37135ed9fed58fc23bd3f6cc5356995a83566796f15c0,PodSandboxId:33923d581c80ed32c00b0d814a480f23ed9979ed4fbaa332cd308787f3e2ef85,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710766226839136999,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-8kk78,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e70aca7-4332-4632-a922-f5b6658bde40,},Annotations:map[string]string{io.kubernetes.container.hash: ea6de7b7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ce88d77959fc5dcb604456c4dcb5d83c5a65f32aa4148ad5667bb47b7f6d5e,PodSandboxId:9f05efce9813faf5e66cefc62697b23fd6de0ef2ffef9dbcccb6f88070092861,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710766096835807858,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-pgf48,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 12add79b-561a-4482-8417-e1c41272f80c,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9977188f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a65b94e17d705079587cf8a716d73a6a313abeb8c5fb61a67baf652a709d94a,PodSandboxId:e95447205e0c5b5241301be3a71358989e24e468a6351e6caaa6de84b3293089,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710766085848996884,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 86b2012e-e452-410b-808c-3fc378157346,},Annotations:map[string]string{io.kubernetes.container.hash: a24f2297,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263cc073a7ada4ace2e1a4941894c21a8826bdfc4ec4cb2b78e6904210bd9384,PodSandboxId:1f955c546533c04351a221119cc0e5ea964cfaedd954af94a11c43d18a181e9f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710766056216935340,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-v52wf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 979481e8-a8f3-42f0-a864-0cbbb970295f,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfc0e54,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7e07706b051d68b0fbc95c9d5af012578d7ceeb37ea1417adba7c5c2fc54ad,PodSandboxId:5088889c909b8a6e462c7c28a782bc5c9b7404d890babaf2ecc22a6f09fdd344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710766035490354128,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q9qrg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17f868c2-2f63-421a-b995-ad4d2af21136,},Annotations:map[string]string{io.kubernetes.container.hash: cbb1eb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:091499b134c6cca7e442deda95e77c651d428d7cebefb2951174d83f99319c75,PodSandboxId:e17d32473bc90069b121cc1ec9304089a5f268b92512aca2b055f3652bf9346f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a
1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710766032284347789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-z2nvx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55455782-5bbd-45e1-9f4a-faf2d6cbbe54,},Annotations:map[string]string{io.kubernetes.container.hash: b62bde0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aeaee22ccbbc9036bd66f7ccf420057425b0c84e8b1e9f61fe618548dd4c6cd,PodSandboxId:adea854b67ff1488914764d88430f72ce43786ce2217dfa36bae9e79d2693b49,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710766029888150188,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-2l56b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 60ebbf3d-9ad7-46d5-8322-97199a8c455a,},Annotations:map[string]string{io.kubernetes.container.hash: 4068f4ed,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427429c96eb55de64e97be2f2d5e55f3a89c66b7d96d5880c4243b484cf7203e,PodSandboxId:f21414a2cb91f80250dc81de6d5fd1773f36a6e812acbf8c69132032e7a004ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337
bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710766017047320108,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-b9sd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2ad747-2bac-41dc-9aa5-96fa6e675413,},Annotations:map[string]string{io.kubernetes.container.hash: 75ec23c9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5f4c0e544a3c3a427017fcac95f8cd8d8d0b499a6ed1b8f962e7e2d1f69b4,PodSandboxId:951e107352e5f1c96fa58a30ee65726c2e81513380dfc4e2f60b192c3d25ef1a,Metadata:&ContainerMetadata{Name:local-pa
th-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710766007524018229,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-q66bb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d517aa47-1e1c-40b4-804f-ee78b8b68ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 9f604234,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02,PodSandboxId:32799164389bb1e21748bba2a22ef377aaae4
0aa89160cd5635e0841cba83c4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765977459291432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aa63b8-ea35-4443-b7d6-fd52b4de2b95,},Annotations:map[string]string{io.kubernetes.container.hash: 411ddc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553,PodSandboxId:0c5d389f84cb0b3e61a3dab70eb10aee0f7330d5cee213613
270d5b0fb05bf18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765969846966224,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qf446,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79feb7b9-b1c9-42a6-adbb-324e45aa35ec,},Annotations:map[string]string{io.kubernetes.container.hash: f73c9935,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10,PodSandboxId:1ccbe7d02bdedb50f5466efb31819a04de97003567b853f945ffe793eca754e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765968347772894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll74j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5816ef-f9cb-492d-a933-16308c544452,},Annotations:map[string]string{io.kubernetes.container.hash: c5db1137,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d,PodSandboxId:bb3ab6d94fd816fa39290ad51524e61e4c6c8fb30da6a1cb7bc32ceb0ebd635d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765948834867355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c711354b860ec72c4c9c1801ca1276b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21,PodSandboxId:3284869491f7b3045f8d4d22116e49ab5bffc48b402384b6435a4e3a7631ccc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765948842126583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1608f73716cdad8193b68b48974d752,},Annotations:map[string]string{io.kubernetes.container.hash: 618d26b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea1767b3abd6811be9eb3b724908
222760cb3c4f268af71b6f4d8c4c42016c2,PodSandboxId:31ccb175c26c39dd34ed8c67b99840e08f4f95abb27b0b4eb0715bd6a5664f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765948777812010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3ce20a5ebdba55d914571b099f373a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cefbcb5554340c73779
a2ac53c1d113bc4571a534c696beba62da4096b8a0837,PodSandboxId:9103b05230128df989c7dc335121da75ac34af7c0de65475a91fe3ab71660daa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765948776039378,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afb7367f1b05b96362989188d3d982e,},Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0
0ec5616-9a09-430e-8dfa-f269bfd23cd7 name=/runtime.v1.RuntimeService/ListContainers
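
The Version, ImageFsInfo, and ListContainers entries above are the CRI v1 gRPC calls that crictl/kubelet make against the CRI-O socket; the "No filters were applied" lines simply mean an empty ListContainersRequest, which returns the full container list shown in each large response. A minimal sketch of issuing the same three calls directly is below. It is not part of the test suite; the socket path /var/run/crio/crio.sock is the CRI-O default and is an assumption for this host.

    // Minimal sketch (assumed socket path), reproducing the RPCs logged above.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// CRI-O's default CRI socket; adjust if the runtime is configured differently.
    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatalf("dial CRI socket: %v", err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	img := runtimeapi.NewImageServiceClient(conn)

    	// RuntimeService/Version -- the VersionResponse lines in the log.
    	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		log.Fatalf("Version: %v", err)
    	}
    	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

    	// ImageService/ImageFsInfo -- image filesystem usage, as logged above.
    	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
    	if err != nil {
    		log.Fatalf("ImageFsInfo: %v", err)
    	}
    	for _, f := range fs.ImageFilesystems {
    		fmt.Printf("%s used=%d bytes\n", f.GetFsId().GetMountpoint(), f.GetUsedBytes().GetValue())
    	}

    	// RuntimeService/ListContainers with an empty filter returns every container,
    	// which is what produces the large ListContainersResponse entries above.
    	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatalf("ListContainers: %v", err)
    	}
    	for _, c := range list.Containers {
    		fmt.Printf("%s  %-25s  %s\n", c.Id, c.GetMetadata().GetName(), c.State)
    	}
    }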
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	48077259d5831       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   33923d581c80e       hello-world-app-5d77478584-8kk78
	60ce88d77959f       ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750                        2 minutes ago       Running             headlamp                  0                   9f05efce9813f       headlamp-5485c556b-pgf48
	4a65b94e17d70       docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba                              2 minutes ago       Running             nginx                     0                   e95447205e0c5       nginx
	263cc073a7ada       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago       Running             gcp-auth                  0                   1f955c546533c       gcp-auth-7d69788767-v52wf
	2a7e07706b051       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              patch                     0                   5088889c909b8       ingress-nginx-admission-patch-q9qrg
	091499b134c6c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   e17d32473bc90       ingress-nginx-admission-create-z2nvx
	1aeaee22ccbbc       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   adea854b67ff1       yakd-dashboard-9947fc6bf-2l56b
	427429c96eb55       registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca        3 minutes ago       Running             metrics-server            0                   f21414a2cb91f       metrics-server-69cf46c98-b9sd4
	43e5f4c0e544a       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   951e107352e5f       local-path-provisioner-78b46b4d5c-q66bb
	7afc9eafec80d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   32799164389bb       storage-provisioner
	cde2882b0d6b4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   0c5d389f84cb0       coredns-5dd5756b68-qf446
	7641c0665ab0b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   1ccbe7d02bded       kube-proxy-ll74j
	89ba208b8a7ce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   3284869491f7b       etcd-addons-106685
	43b114c0ac0bb       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   bb3ab6d94fd81       kube-scheduler-addons-106685
	2ea1767b3abd6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   31ccb175c26c3       kube-controller-manager-addons-106685
	cefbcb5554340       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   9103b05230128       kube-apiserver-addons-106685
	
	
	==> coredns [cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43648 - 65501 "HINFO IN 1605989327845107632.4539336522483561825. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024121704s
	[INFO] 10.244.0.22:50704 - 65020 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000386299s
	[INFO] 10.244.0.22:50723 - 59421 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000143048s
	[INFO] 10.244.0.22:59820 - 30130 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118336s
	[INFO] 10.244.0.22:35897 - 6736 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000083634s
	[INFO] 10.244.0.22:57164 - 9487 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077682s
	[INFO] 10.244.0.22:38010 - 62495 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081567s
	[INFO] 10.244.0.22:49102 - 23464 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000989906s
	[INFO] 10.244.0.22:34634 - 60861 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000909565s
	[INFO] 10.244.0.26:46975 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000686507s
	[INFO] 10.244.0.26:36447 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106237s
	
	
	==> describe nodes <==
	Name:               addons-106685
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-106685
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=addons-106685
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_45_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-106685
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:45:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-106685
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:50:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:48:28 +0000   Mon, 18 Mar 2024 12:45:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:48:28 +0000   Mon, 18 Mar 2024 12:45:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:48:28 +0000   Mon, 18 Mar 2024 12:45:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:48:28 +0000   Mon, 18 Mar 2024 12:45:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    addons-106685
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 6613acac122d44d8b209206584b45567
	  System UUID:                6613acac-122d-44d8-b209-206584b45567
	  Boot ID:                    37380014-2689-4f7f-9b39-095d095ff374
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-8kk78           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  gcp-auth                    gcp-auth-7d69788767-v52wf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  headlamp                    headlamp-5485c556b-pgf48                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 coredns-5dd5756b68-qf446                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m28s
	  kube-system                 etcd-addons-106685                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m40s
	  kube-system                 kube-apiserver-addons-106685               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-controller-manager-addons-106685      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-proxy-ll74j                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-addons-106685               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 metrics-server-69cf46c98-b9sd4             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m21s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  local-path-storage          local-path-provisioner-78b46b4d5c-q66bb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-2l56b             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node addons-106685 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node addons-106685 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node addons-106685 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m40s                  kubelet          Node addons-106685 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s                  kubelet          Node addons-106685 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s                  kubelet          Node addons-106685 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m40s                  kubelet          Node addons-106685 status is now: NodeReady
	  Normal  RegisteredNode           4m29s                  node-controller  Node addons-106685 event: Registered Node addons-106685 in Controller
	
	
	==> dmesg <==
	[Mar18 12:46] systemd-fstab-generator[1458]: Ignoring "noauto" option for root device
	[  +0.162208] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.055794] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.152305] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.571548] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.653936] kauditd_printk_skb: 25 callbacks suppressed
	[ +17.920585] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.155030] kauditd_printk_skb: 9 callbacks suppressed
	[Mar18 12:47] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.467189] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.759240] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.342717] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.363439] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.032283] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.005235] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.028644] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.035798] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.076685] kauditd_printk_skb: 49 callbacks suppressed
	[Mar18 12:48] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.148890] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.846942] kauditd_printk_skb: 2 callbacks suppressed
	[ +23.209538] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.942129] kauditd_printk_skb: 25 callbacks suppressed
	[Mar18 12:50] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.793441] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21] <==
	{"level":"info","ts":"2024-03-18T12:47:29.992426Z","caller":"traceutil/trace.go:171","msg":"trace[606132864] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1142; }","duration":"266.574168ms","start":"2024-03-18T12:47:29.725845Z","end":"2024-03-18T12:47:29.99242Z","steps":["trace[606132864] 'agreement among raft nodes before linearized reading'  (duration: 266.492068ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:29.992451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.719284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-03-18T12:47:29.99257Z","caller":"traceutil/trace.go:171","msg":"trace[1732945] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1142; }","duration":"319.892275ms","start":"2024-03-18T12:47:29.672669Z","end":"2024-03-18T12:47:29.992561Z","steps":["trace[1732945] 'agreement among raft nodes before linearized reading'  (duration: 319.346423ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:29.992788Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:47:29.672656Z","time spent":"319.943657ms","remote":"127.0.0.1:53016","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":211,"response size":31,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	{"level":"warn","ts":"2024-03-18T12:47:29.99307Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.239834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81546"}
	{"level":"info","ts":"2024-03-18T12:47:29.993114Z","caller":"traceutil/trace.go:171","msg":"trace[920829100] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1142; }","duration":"237.285497ms","start":"2024-03-18T12:47:29.755822Z","end":"2024-03-18T12:47:29.993108Z","steps":["trace[920829100] 'agreement among raft nodes before linearized reading'  (duration: 237.071962ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:29.992882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.487415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13724"}
	{"level":"info","ts":"2024-03-18T12:47:29.99563Z","caller":"traceutil/trace.go:171","msg":"trace[1628500299] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1142; }","duration":"103.231614ms","start":"2024-03-18T12:47:29.892387Z","end":"2024-03-18T12:47:29.995618Z","steps":["trace[1628500299] 'agreement among raft nodes before linearized reading'  (duration: 100.459376ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:33.245862Z","caller":"traceutil/trace.go:171","msg":"trace[1681670925] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"223.018168ms","start":"2024-03-18T12:47:33.02283Z","end":"2024-03-18T12:47:33.245848Z","steps":["trace[1681670925] 'process raft request'  (duration: 222.906107ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:33.246672Z","caller":"traceutil/trace.go:171","msg":"trace[1807651838] transaction","detail":"{read_only:false; response_revision:1158; number_of_response:1; }","duration":"210.986663ms","start":"2024-03-18T12:47:33.035673Z","end":"2024-03-18T12:47:33.24666Z","steps":["trace[1807651838] 'process raft request'  (duration: 210.646092ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:33.246815Z","caller":"traceutil/trace.go:171","msg":"trace[1968247310] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"187.537816ms","start":"2024-03-18T12:47:33.059272Z","end":"2024-03-18T12:47:33.246809Z","steps":["trace[1968247310] 'process raft request'  (duration: 187.090536ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:55.273233Z","caller":"traceutil/trace.go:171","msg":"trace[977565410] transaction","detail":"{read_only:false; response_revision:1332; number_of_response:1; }","duration":"217.193575ms","start":"2024-03-18T12:47:55.055971Z","end":"2024-03-18T12:47:55.273165Z","steps":["trace[977565410] 'process raft request'  (duration: 212.164064ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:55.274341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.143742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T12:47:55.274378Z","caller":"traceutil/trace.go:171","msg":"trace[720055926] range","detail":"{range_begin:/registry/services/specs/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:1332; }","duration":"218.246559ms","start":"2024-03-18T12:47:55.056122Z","end":"2024-03-18T12:47:55.274369Z","steps":["trace[720055926] 'agreement among raft nodes before linearized reading'  (duration: 218.080596ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:55.275058Z","caller":"traceutil/trace.go:171","msg":"trace[1290267950] linearizableReadLoop","detail":"{readStateIndex:1379; appliedIndex:1378; }","duration":"218.856924ms","start":"2024-03-18T12:47:55.05619Z","end":"2024-03-18T12:47:55.275046Z","steps":["trace[1290267950] 'read index received'  (duration: 211.912573ms)","trace[1290267950] 'applied index is now lower than readState.Index'  (duration: 6.943412ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T12:48:05.689844Z","caller":"traceutil/trace.go:171","msg":"trace[2003246303] linearizableReadLoop","detail":"{readStateIndex:1456; appliedIndex:1455; }","duration":"324.492418ms","start":"2024-03-18T12:48:05.365335Z","end":"2024-03-18T12:48:05.689827Z","steps":["trace[2003246303] 'read index received'  (duration: 324.345387ms)","trace[2003246303] 'applied index is now lower than readState.Index'  (duration: 146.558µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T12:48:05.69004Z","caller":"traceutil/trace.go:171","msg":"trace[213056399] transaction","detail":"{read_only:false; response_revision:1407; number_of_response:1; }","duration":"384.89131ms","start":"2024-03-18T12:48:05.305126Z","end":"2024-03-18T12:48:05.690017Z","steps":["trace[213056399] 'process raft request'  (duration: 384.587878ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:48:05.690166Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"324.8325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8648"}
	{"level":"info","ts":"2024-03-18T12:48:05.690211Z","caller":"traceutil/trace.go:171","msg":"trace[1495986193] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1407; }","duration":"324.892174ms","start":"2024-03-18T12:48:05.365312Z","end":"2024-03-18T12:48:05.690204Z","steps":["trace[1495986193] 'agreement among raft nodes before linearized reading'  (duration: 324.73941ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:48:05.690258Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:48:05.365299Z","time spent":"324.953031ms","remote":"127.0.0.1:53128","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":8671,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-03-18T12:48:05.690276Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.169416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8648"}
	{"level":"info","ts":"2024-03-18T12:48:05.69032Z","caller":"traceutil/trace.go:171","msg":"trace[47179391] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1407; }","duration":"296.212599ms","start":"2024-03-18T12:48:05.394102Z","end":"2024-03-18T12:48:05.690314Z","steps":["trace[47179391] 'agreement among raft nodes before linearized reading'  (duration: 296.140777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:48:05.690209Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:48:05.305109Z","time spent":"384.994255ms","remote":"127.0.0.1:53194","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1379 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-03-18T12:48:20.813805Z","caller":"traceutil/trace.go:171","msg":"trace[2070353763] transaction","detail":"{read_only:false; response_revision:1535; number_of_response:1; }","duration":"397.378232ms","start":"2024-03-18T12:48:20.416389Z","end":"2024-03-18T12:48:20.813767Z","steps":["trace[2070353763] 'process raft request'  (duration: 396.987114ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:48:20.813963Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:48:20.416373Z","time spent":"397.507314ms","remote":"127.0.0.1:53122","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1534 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> gcp-auth [263cc073a7ada4ace2e1a4941894c21a8826bdfc4ec4cb2b78e6904210bd9384] <==
	2024/03/18 12:47:36 GCP Auth Webhook started!
	2024/03/18 12:47:37 Ready to marshal response ...
	2024/03/18 12:47:37 Ready to write response ...
	2024/03/18 12:47:37 Ready to marshal response ...
	2024/03/18 12:47:37 Ready to write response ...
	2024/03/18 12:47:48 Ready to marshal response ...
	2024/03/18 12:47:48 Ready to write response ...
	2024/03/18 12:47:48 Ready to marshal response ...
	2024/03/18 12:47:48 Ready to write response ...
	2024/03/18 12:47:49 Ready to marshal response ...
	2024/03/18 12:47:49 Ready to write response ...
	2024/03/18 12:47:52 Ready to marshal response ...
	2024/03/18 12:47:52 Ready to write response ...
	2024/03/18 12:47:57 Ready to marshal response ...
	2024/03/18 12:47:57 Ready to write response ...
	2024/03/18 12:48:08 Ready to marshal response ...
	2024/03/18 12:48:08 Ready to write response ...
	2024/03/18 12:48:08 Ready to marshal response ...
	2024/03/18 12:48:08 Ready to write response ...
	2024/03/18 12:48:09 Ready to marshal response ...
	2024/03/18 12:48:09 Ready to write response ...
	2024/03/18 12:48:31 Ready to marshal response ...
	2024/03/18 12:48:31 Ready to write response ...
	2024/03/18 12:50:24 Ready to marshal response ...
	2024/03/18 12:50:24 Ready to write response ...
	
	
	==> kernel <==
	 12:50:35 up 5 min,  0 users,  load average: 1.11, 1.30, 0.64
	Linux addons-106685 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cefbcb5554340c73779a2ac53c1d113bc4571a534c696beba62da4096b8a0837] <==
	I0318 12:48:02.128095       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0318 12:48:02.151799       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0318 12:48:03.213801       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0318 12:48:08.953379       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.10.181"}
	I0318 12:48:09.742750       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0318 12:48:48.626139       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:48:48.626225       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:48:48.642938       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:48:48.643009       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:48:48.664168       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:48:48.664244       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:48:48.689059       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:48:48.689186       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:48:48.730355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:48:48.730431       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:48:48.741292       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:48:48.741391       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:48:48.805737       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:48:48.805826       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0318 12:48:49.731094       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0318 12:48:49.807117       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0318 12:48:49.810223       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0318 12:48:51.580287       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 12:49:51.581034       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 12:50:24.551852       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.91.196"}
	
	
	==> kube-controller-manager [2ea1767b3abd6811be9eb3b724908222760cb3c4f268af71b6f4d8c4c42016c2] <==
	W0318 12:49:26.466804       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:49:26.467002       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 12:49:27.842739       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:49:27.842793       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 12:49:31.374408       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:49:31.374455       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 12:49:53.923216       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:49:53.923375       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 12:49:54.131666       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:49:54.131811       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 12:50:04.194655       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:50:04.194829       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 12:50:08.327554       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:50:08.327618       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0318 12:50:24.327740       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0318 12:50:24.367984       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-8kk78"
	I0318 12:50:24.378755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="52.93868ms"
	I0318 12:50:24.393038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.055479ms"
	I0318 12:50:24.429669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.74151ms"
	I0318 12:50:24.429826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="76.581µs"
	I0318 12:50:27.325605       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0318 12:50:27.330247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="6.713µs"
	I0318 12:50:27.337297       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0318 12:50:27.514912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.349338ms"
	I0318 12:50:27.515218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="187.97µs"
	
	
	==> kube-proxy [7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10] <==
	I0318 12:46:09.960008       1 server_others.go:69] "Using iptables proxy"
	I0318 12:46:09.980085       1 node.go:141] Successfully retrieved node IP: 192.168.39.205
	I0318 12:46:10.087383       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:46:10.087462       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:46:10.096878       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:46:10.096941       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:46:10.097111       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:46:10.097141       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:46:10.099325       1 config.go:188] "Starting service config controller"
	I0318 12:46:10.099353       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:46:10.099373       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:46:10.099387       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:46:10.099798       1 config.go:315] "Starting node config controller"
	I0318 12:46:10.099804       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:46:10.200441       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:46:10.200448       1 shared_informer.go:318] Caches are synced for node config
	I0318 12:46:10.200629       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d] <==
	W0318 12:45:51.717109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:45:51.717122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 12:45:52.535854       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:45:52.535968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 12:45:52.612145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:45:52.612236       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:45:52.621660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 12:45:52.621709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 12:45:52.631786       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:45:52.631862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 12:45:52.791022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:45:52.791072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 12:45:52.829899       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:45:52.829947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:45:52.863852       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 12:45:52.863928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 12:45:52.872921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 12:45:52.873000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 12:45:52.913160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 12:45:52.913276       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 12:45:53.061394       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 12:45:53.061548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 12:45:53.251248       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 12:45:53.251352       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:45:56.482755       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 12:50:24 addons-106685 kubelet[1256]: I0318 12:50:24.388313    1256 memory_manager.go:346] "RemoveStaleState removing state" podUID="683115f5-0641-4123-81af-970fe5185bbe" containerName="liveness-probe"
	Mar 18 12:50:24 addons-106685 kubelet[1256]: I0318 12:50:24.388344    1256 memory_manager.go:346] "RemoveStaleState removing state" podUID="683115f5-0641-4123-81af-970fe5185bbe" containerName="hostpath"
	Mar 18 12:50:24 addons-106685 kubelet[1256]: I0318 12:50:24.411479    1256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl72x\" (UniqueName: \"kubernetes.io/projected/1e70aca7-4332-4632-a922-f5b6658bde40-kube-api-access-xl72x\") pod \"hello-world-app-5d77478584-8kk78\" (UID: \"1e70aca7-4332-4632-a922-f5b6658bde40\") " pod="default/hello-world-app-5d77478584-8kk78"
	Mar 18 12:50:24 addons-106685 kubelet[1256]: I0318 12:50:24.411718    1256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1e70aca7-4332-4632-a922-f5b6658bde40-gcp-creds\") pod \"hello-world-app-5d77478584-8kk78\" (UID: \"1e70aca7-4332-4632-a922-f5b6658bde40\") " pod="default/hello-world-app-5d77478584-8kk78"
	Mar 18 12:50:25 addons-106685 kubelet[1256]: I0318 12:50:25.622785    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8sc2\" (UniqueName: \"kubernetes.io/projected/b2c4ec5a-1796-470c-b324-7c018ab2799d-kube-api-access-g8sc2\") pod \"b2c4ec5a-1796-470c-b324-7c018ab2799d\" (UID: \"b2c4ec5a-1796-470c-b324-7c018ab2799d\") "
	Mar 18 12:50:25 addons-106685 kubelet[1256]: I0318 12:50:25.630705    1256 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c4ec5a-1796-470c-b324-7c018ab2799d-kube-api-access-g8sc2" (OuterVolumeSpecName: "kube-api-access-g8sc2") pod "b2c4ec5a-1796-470c-b324-7c018ab2799d" (UID: "b2c4ec5a-1796-470c-b324-7c018ab2799d"). InnerVolumeSpecName "kube-api-access-g8sc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 12:50:25 addons-106685 kubelet[1256]: I0318 12:50:25.723742    1256 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g8sc2\" (UniqueName: \"kubernetes.io/projected/b2c4ec5a-1796-470c-b324-7c018ab2799d-kube-api-access-g8sc2\") on node \"addons-106685\" DevicePath \"\""
	Mar 18 12:50:26 addons-106685 kubelet[1256]: I0318 12:50:26.481159    1256 scope.go:117] "RemoveContainer" containerID="91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f"
	Mar 18 12:50:26 addons-106685 kubelet[1256]: I0318 12:50:26.660026    1256 scope.go:117] "RemoveContainer" containerID="91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f"
	Mar 18 12:50:26 addons-106685 kubelet[1256]: E0318 12:50:26.660949    1256 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f\": container with ID starting with 91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f not found: ID does not exist" containerID="91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f"
	Mar 18 12:50:26 addons-106685 kubelet[1256]: I0318 12:50:26.661009    1256 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f"} err="failed to get container status \"91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f\": rpc error: code = NotFound desc = could not find container \"91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f\": container with ID starting with 91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f not found: ID does not exist"
	Mar 18 12:50:27 addons-106685 kubelet[1256]: I0318 12:50:27.023777    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b2c4ec5a-1796-470c-b324-7c018ab2799d" path="/var/lib/kubelet/pods/b2c4ec5a-1796-470c-b324-7c018ab2799d/volumes"
	Mar 18 12:50:29 addons-106685 kubelet[1256]: I0318 12:50:29.017225    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="17f868c2-2f63-421a-b995-ad4d2af21136" path="/var/lib/kubelet/pods/17f868c2-2f63-421a-b995-ad4d2af21136/volumes"
	Mar 18 12:50:29 addons-106685 kubelet[1256]: I0318 12:50:29.017790    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="55455782-5bbd-45e1-9f4a-faf2d6cbbe54" path="/var/lib/kubelet/pods/55455782-5bbd-45e1-9f4a-faf2d6cbbe54/volumes"
	Mar 18 12:50:30 addons-106685 kubelet[1256]: I0318 12:50:30.519760    1256 scope.go:117] "RemoveContainer" containerID="bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66"
	Mar 18 12:50:30 addons-106685 kubelet[1256]: I0318 12:50:30.536159    1256 scope.go:117] "RemoveContainer" containerID="bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66"
	Mar 18 12:50:30 addons-106685 kubelet[1256]: E0318 12:50:30.536769    1256 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66\": container with ID starting with bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66 not found: ID does not exist" containerID="bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66"
	Mar 18 12:50:30 addons-106685 kubelet[1256]: I0318 12:50:30.536809    1256 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66"} err="failed to get container status \"bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66\": rpc error: code = NotFound desc = could not find container \"bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66\": container with ID starting with bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66 not found: ID does not exist"
	Mar 18 12:50:30 addons-106685 kubelet[1256]: I0318 12:50:30.559309    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmqpf\" (UniqueName: \"kubernetes.io/projected/82ecf218-fe2e-4983-ac66-4aceed2fb70e-kube-api-access-nmqpf\") pod \"82ecf218-fe2e-4983-ac66-4aceed2fb70e\" (UID: \"82ecf218-fe2e-4983-ac66-4aceed2fb70e\") "
	Mar 18 12:50:30 addons-106685 kubelet[1256]: I0318 12:50:30.559352    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82ecf218-fe2e-4983-ac66-4aceed2fb70e-webhook-cert\") pod \"82ecf218-fe2e-4983-ac66-4aceed2fb70e\" (UID: \"82ecf218-fe2e-4983-ac66-4aceed2fb70e\") "
	Mar 18 12:50:30 addons-106685 kubelet[1256]: I0318 12:50:30.562676    1256 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ecf218-fe2e-4983-ac66-4aceed2fb70e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "82ecf218-fe2e-4983-ac66-4aceed2fb70e" (UID: "82ecf218-fe2e-4983-ac66-4aceed2fb70e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 18 12:50:30 addons-106685 kubelet[1256]: I0318 12:50:30.565158    1256 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ecf218-fe2e-4983-ac66-4aceed2fb70e-kube-api-access-nmqpf" (OuterVolumeSpecName: "kube-api-access-nmqpf") pod "82ecf218-fe2e-4983-ac66-4aceed2fb70e" (UID: "82ecf218-fe2e-4983-ac66-4aceed2fb70e"). InnerVolumeSpecName "kube-api-access-nmqpf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 12:50:30 addons-106685 kubelet[1256]: I0318 12:50:30.660229    1256 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82ecf218-fe2e-4983-ac66-4aceed2fb70e-webhook-cert\") on node \"addons-106685\" DevicePath \"\""
	Mar 18 12:50:30 addons-106685 kubelet[1256]: I0318 12:50:30.660268    1256 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nmqpf\" (UniqueName: \"kubernetes.io/projected/82ecf218-fe2e-4983-ac66-4aceed2fb70e-kube-api-access-nmqpf\") on node \"addons-106685\" DevicePath \"\""
	Mar 18 12:50:31 addons-106685 kubelet[1256]: I0318 12:50:31.019053    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="82ecf218-fe2e-4983-ac66-4aceed2fb70e" path="/var/lib/kubelet/pods/82ecf218-fe2e-4983-ac66-4aceed2fb70e/volumes"
	
	
	==> storage-provisioner [7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02] <==
	I0318 12:46:18.547676       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 12:46:18.605006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 12:46:18.605046       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 12:46:18.680105       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 12:46:18.680244       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-106685_2bc397de-f037-44b1-b5c8-1d7cc7c08af5!
	I0318 12:46:18.684149       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3b765950-073e-425a-929e-664db04b17c7", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-106685_2bc397de-f037-44b1-b5c8-1d7cc7c08af5 became leader
	I0318 12:46:18.780654       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-106685_2bc397de-f037-44b1-b5c8-1d7cc7c08af5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-106685 -n addons-106685
helpers_test.go:261: (dbg) Run:  kubectl --context addons-106685 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (159.81s)

TestAddons/parallel/MetricsServer (11.59s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.933124ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-b9sd4" [ef2ad747-2bac-41dc-9aa5-96fa6e675413] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005693s
addons_test.go:415: (dbg) Run:  kubectl --context addons-106685 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-106685 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (725.098593ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0318 12:47:55.930090 1077273 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:47:55.930258 1077273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:47:55.930275 1077273 out.go:304] Setting ErrFile to fd 2...
	I0318 12:47:55.930282 1077273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:47:55.930494 1077273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 12:47:55.930782 1077273 mustload.go:65] Loading cluster: addons-106685
	I0318 12:47:55.931324 1077273 config.go:182] Loaded profile config "addons-106685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:47:55.931356 1077273 addons.go:597] checking whether the cluster is paused
	I0318 12:47:55.931452 1077273 config.go:182] Loaded profile config "addons-106685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:47:55.931467 1077273 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:47:55.931933 1077273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:47:55.931993 1077273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:47:55.950858 1077273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0318 12:47:55.951529 1077273 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:47:55.952304 1077273 main.go:141] libmachine: Using API Version  1
	I0318 12:47:55.952336 1077273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:47:55.952850 1077273 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:47:55.953109 1077273 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:47:55.954728 1077273 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:47:55.954991 1077273 ssh_runner.go:195] Run: systemctl --version
	I0318 12:47:55.955025 1077273 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:47:55.957362 1077273 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:47:55.957744 1077273 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:47:55.957783 1077273 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:47:55.957899 1077273 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:47:55.958114 1077273 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:47:55.958302 1077273 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:47:55.958495 1077273 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:47:56.123263 1077273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 12:47:56.123375 1077273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 12:47:56.337076 1077273 cri.go:89] found id: "639d7cbd635bec7a03e433b3cc4baca80eb0ab5101ec4eb2add72f33ddcf4cb9"
	I0318 12:47:56.337116 1077273 cri.go:89] found id: "efa3659d6d1eb75a0bd3c9794b44859ccde04b08e91f5ca51e072a420d4a7fae"
	I0318 12:47:56.337119 1077273 cri.go:89] found id: "73fa199d2eebb92ad729c83c4ae7050a834efef626a70fda6bf1e3f03b6a660f"
	I0318 12:47:56.337122 1077273 cri.go:89] found id: "57ab9d6b3fea078f0800ba2b91a8a3d10997c4bc3ed05d0b4b9d1cf36a4a2bb4"
	I0318 12:47:56.337125 1077273 cri.go:89] found id: "bf669c3480f1038adad49ae09bd8b3c1fd9f511e491c04b45b1f4840703a68e1"
	I0318 12:47:56.337129 1077273 cri.go:89] found id: "953dbb378e5b4006c55ff54db6c9ae3210057e97f8655cff6a04fc432f1b3877"
	I0318 12:47:56.337135 1077273 cri.go:89] found id: "ff7e80ed5fdbb0bf562b6e5e2bc330de6d795a2f1eb9474554f2ba90ce65132e"
	I0318 12:47:56.337139 1077273 cri.go:89] found id: "4af9eaffca84bf0092cd1e0c0e39404e2b60d5f1067b79beb7b0ddde293f53f4"
	I0318 12:47:56.337143 1077273 cri.go:89] found id: "a61ae866119874a73ecbc37488e1861c7338beef7a1759dade1c9586c13d9614"
	I0318 12:47:56.337153 1077273 cri.go:89] found id: "427429c96eb55de64e97be2f2d5e55f3a89c66b7d96d5880c4243b484cf7203e"
	I0318 12:47:56.337157 1077273 cri.go:89] found id: "9c6a76fb3f64f3934a68cb9c07fa63fa2b26b65b1d3cba263722f50f42704cc8"
	I0318 12:47:56.337161 1077273 cri.go:89] found id: "927c5283e918f2565caa60ab40c7f117d09dcbc6c4bc6d96871ba9312dc9d1e6"
	I0318 12:47:56.337167 1077273 cri.go:89] found id: "a4e7fe2f55adef04799de5ea8b0f32ff87df300f20ad94186b44de5ff7250573"
	I0318 12:47:56.337172 1077273 cri.go:89] found id: "91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f"
	I0318 12:47:56.337180 1077273 cri.go:89] found id: "9a0f10cb7f9458a6732e739c6a47a6e0e0de8f02f96d0f95769eb463d400e6cd"
	I0318 12:47:56.337188 1077273 cri.go:89] found id: "e3ff60f6656212ae190bae254c369eb159ea877069b30f40188e0961a45a2706"
	I0318 12:47:56.337192 1077273 cri.go:89] found id: "7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02"
	I0318 12:47:56.337202 1077273 cri.go:89] found id: "cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553"
	I0318 12:47:56.337209 1077273 cri.go:89] found id: "7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10"
	I0318 12:47:56.337213 1077273 cri.go:89] found id: "89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21"
	I0318 12:47:56.337220 1077273 cri.go:89] found id: "43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d"
	I0318 12:47:56.337223 1077273 cri.go:89] found id: "2ea1767b3abd6811be9eb3b724908222760cb3c4f268af71b6f4d8c4c42016c2"
	I0318 12:47:56.337225 1077273 cri.go:89] found id: "cefbcb5554340c73779a2ac53c1d113bc4571a534c696beba62da4096b8a0837"
	I0318 12:47:56.337228 1077273 cri.go:89] found id: ""
	I0318 12:47:56.337297 1077273 ssh_runner.go:195] Run: sudo runc list -f json
	I0318 12:47:56.582472 1077273 main.go:141] libmachine: Making call to close driver server
	I0318 12:47:56.582492 1077273 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:47:56.582862 1077273 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:47:56.582900 1077273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:47:56.582956 1077273 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:47:56.585612 1077273 out.go:177] 
	W0318 12:47:56.587664 1077273 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-18T12:47:56Z" level=error msg="stat /run/runc/927c5283e918f2565caa60ab40c7f117d09dcbc6c4bc6d96871ba9312dc9d1e6: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-18T12:47:56Z" level=error msg="stat /run/runc/927c5283e918f2565caa60ab40c7f117d09dcbc6c4bc6d96871ba9312dc9d1e6: no such file or directory"
	
	W0318 12:47:56.587689 1077273 out.go:239] * 
	* 
	W0318 12:47:56.592113 1077273 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 12:47:56.593734 1077273 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:434: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-106685 addons disable metrics-server --alsologtostderr -v=1": exit status 11
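Note on the failure above: per the stderr, minikube checks whether the cluster is paused before disabling an addon. It does this by listing kube-system container IDs on the node with crictl and then running "sudo runc list -f json"; in this run runc exited 1 because the state directory for one of the listed container IDs (the one named in the stat error) was already gone from /run/runc, and the error was surfaced as MK_ADDON_DISABLE_PAUSED. That is consistent with a container being torn down between the crictl listing and the runc call rather than with a genuinely paused cluster. Below is a minimal Go sketch of the same two-step check, offered only as an illustration of the sequence shown in the log, not as minikube's actual implementation; it assumes (not stated in the report) that it runs on the minikube node itself, for example via "minikube ssh", with crictl and runc reachable through sudo.

// pausedcheck.go: illustrative sketch of the two commands shown in the log above.
// Assumptions (not from the report): executed directly on the minikube node,
// with crictl and runc available through sudo.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: collect kube-system container IDs, as the log does with crictl.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("crictl reported %d kube-system containers\n", len(ids))

	// Step 2: ask runc for the state of the containers it knows about. In the
	// failing run this step exited 1 because one ID from step 1 no longer had
	// a state directory under /run/runc.
	raw, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc list failed (the error surfaced as MK_ADDON_DISABLE_PAUSED):", err)
		return
	}
	// runc prints a JSON array of container state objects.
	var states []struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	if err := json.Unmarshal(raw, &states); err != nil {
		fmt.Println("could not parse runc output:", err)
		return
	}
	for _, s := range states {
		if s.Status == "paused" {
			fmt.Println("paused container:", s.ID)
		}
	}
}

On a healthy node both steps succeed and the paused check passes; a failure like the one logged here points at a container disappearing between the crictl listing and the runc call.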
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-106685 -n addons-106685
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-106685 logs -n 25: (3.600417564s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-091393 | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC |                     |
	|         | -p download-only-091393                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC | 18 Mar 24 12:44 UTC |
	| delete  | -p download-only-091393                                                                     | download-only-091393 | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC | 18 Mar 24 12:44 UTC |
	| start   | -o=json --download-only                                                                     | download-only-994148 | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC |                     |
	|         | -p download-only-994148                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| delete  | -p download-only-994148                                                                     | download-only-994148 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| start   | -o=json --download-only                                                                     | download-only-954927 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC |                     |
	|         | -p download-only-954927                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| delete  | -p download-only-954927                                                                     | download-only-954927 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| delete  | -p download-only-091393                                                                     | download-only-091393 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| delete  | -p download-only-994148                                                                     | download-only-994148 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| delete  | -p download-only-954927                                                                     | download-only-954927 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-502218 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC |                     |
	|         | binary-mirror-502218                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38477                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-502218                                                                     | binary-mirror-502218 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| addons  | enable dashboard -p                                                                         | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC |                     |
	|         | addons-106685                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC |                     |
	|         | addons-106685                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-106685 --wait=true                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-106685 ssh cat                                                                       | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /opt/local-path-provisioner/pvc-e86d5e17-8190-4e06-8916-09db8624ca3e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-106685 addons disable                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-106685 addons disable                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-106685 ip                                                                            | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	| addons  | addons-106685 addons disable                                                                | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-106685 addons                                                                        | addons-106685        | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC |                     |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:45:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:45:11.015268 1075954 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:45:11.015544 1075954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:45:11.015553 1075954 out.go:304] Setting ErrFile to fd 2...
	I0318 12:45:11.015557 1075954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:45:11.015765 1075954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 12:45:11.016475 1075954 out.go:298] Setting JSON to false
	I0318 12:45:11.017639 1075954 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":16058,"bootTime":1710749853,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:45:11.017715 1075954 start.go:139] virtualization: kvm guest
	I0318 12:45:11.019865 1075954 out.go:177] * [addons-106685] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:45:11.021600 1075954 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 12:45:11.022909 1075954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:45:11.021689 1075954 notify.go:220] Checking for updates...
	I0318 12:45:11.025577 1075954 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 12:45:11.026988 1075954 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 12:45:11.028362 1075954 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 12:45:11.029731 1075954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:45:11.031241 1075954 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:45:11.064025 1075954 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 12:45:11.065587 1075954 start.go:297] selected driver: kvm2
	I0318 12:45:11.065616 1075954 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:45:11.065631 1075954 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:45:11.066336 1075954 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:45:11.066438 1075954 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:45:11.083338 1075954 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:45:11.083402 1075954 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:45:11.083619 1075954 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:45:11.083687 1075954 cni.go:84] Creating CNI manager for ""
	I0318 12:45:11.083701 1075954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:45:11.083710 1075954 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 12:45:11.083760 1075954 start.go:340] cluster config:
	{Name:addons-106685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-106685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:45:11.083895 1075954 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:45:11.085948 1075954 out.go:177] * Starting "addons-106685" primary control-plane node in "addons-106685" cluster
	I0318 12:45:11.087467 1075954 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:45:11.087554 1075954 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 12:45:11.087569 1075954 cache.go:56] Caching tarball of preloaded images
	I0318 12:45:11.087691 1075954 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:45:11.087705 1075954 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 12:45:11.088942 1075954 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/config.json ...
	I0318 12:45:11.089080 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/config.json: {Name:mkb075179247883cdc6357e66c091da0632c780c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:11.089636 1075954 start.go:360] acquireMachinesLock for addons-106685: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:45:11.089745 1075954 start.go:364] duration metric: took 83.912µs to acquireMachinesLock for "addons-106685"
	I0318 12:45:11.089770 1075954 start.go:93] Provisioning new machine with config: &{Name:addons-106685 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-106685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:45:11.089870 1075954 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 12:45:11.091687 1075954 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 12:45:11.091991 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:45:11.092052 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:45:11.107470 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40685
	I0318 12:45:11.108112 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:45:11.108746 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:45:11.108771 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:45:11.109173 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:45:11.109368 1075954 main.go:141] libmachine: (addons-106685) Calling .GetMachineName
	I0318 12:45:11.109562 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:11.109782 1075954 start.go:159] libmachine.API.Create for "addons-106685" (driver="kvm2")
	I0318 12:45:11.109812 1075954 client.go:168] LocalClient.Create starting
	I0318 12:45:11.109853 1075954 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 12:45:11.382933 1075954 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 12:45:11.835577 1075954 main.go:141] libmachine: Running pre-create checks...
	I0318 12:45:11.835603 1075954 main.go:141] libmachine: (addons-106685) Calling .PreCreateCheck
	I0318 12:45:11.836187 1075954 main.go:141] libmachine: (addons-106685) Calling .GetConfigRaw
	I0318 12:45:11.836711 1075954 main.go:141] libmachine: Creating machine...
	I0318 12:45:11.836728 1075954 main.go:141] libmachine: (addons-106685) Calling .Create
	I0318 12:45:11.836920 1075954 main.go:141] libmachine: (addons-106685) Creating KVM machine...
	I0318 12:45:11.838282 1075954 main.go:141] libmachine: (addons-106685) DBG | found existing default KVM network
	I0318 12:45:11.839122 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:11.838953 1075976 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0318 12:45:11.839174 1075954 main.go:141] libmachine: (addons-106685) DBG | created network xml: 
	I0318 12:45:11.839198 1075954 main.go:141] libmachine: (addons-106685) DBG | <network>
	I0318 12:45:11.839212 1075954 main.go:141] libmachine: (addons-106685) DBG |   <name>mk-addons-106685</name>
	I0318 12:45:11.839227 1075954 main.go:141] libmachine: (addons-106685) DBG |   <dns enable='no'/>
	I0318 12:45:11.839235 1075954 main.go:141] libmachine: (addons-106685) DBG |   
	I0318 12:45:11.839246 1075954 main.go:141] libmachine: (addons-106685) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 12:45:11.839258 1075954 main.go:141] libmachine: (addons-106685) DBG |     <dhcp>
	I0318 12:45:11.839270 1075954 main.go:141] libmachine: (addons-106685) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 12:45:11.839280 1075954 main.go:141] libmachine: (addons-106685) DBG |     </dhcp>
	I0318 12:45:11.839291 1075954 main.go:141] libmachine: (addons-106685) DBG |   </ip>
	I0318 12:45:11.839298 1075954 main.go:141] libmachine: (addons-106685) DBG |   
	I0318 12:45:11.839305 1075954 main.go:141] libmachine: (addons-106685) DBG | </network>
	I0318 12:45:11.839336 1075954 main.go:141] libmachine: (addons-106685) DBG | 
	I0318 12:45:11.844813 1075954 main.go:141] libmachine: (addons-106685) DBG | trying to create private KVM network mk-addons-106685 192.168.39.0/24...
	I0318 12:45:11.916130 1075954 main.go:141] libmachine: (addons-106685) DBG | private KVM network mk-addons-106685 192.168.39.0/24 created
	I0318 12:45:11.916172 1075954 main.go:141] libmachine: (addons-106685) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685 ...
	I0318 12:45:11.916197 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:11.916093 1075976 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 12:45:11.916222 1075954 main.go:141] libmachine: (addons-106685) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:45:11.916243 1075954 main.go:141] libmachine: (addons-106685) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:45:12.163608 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:12.163410 1075976 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa...
	I0318 12:45:12.244894 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:12.244720 1075976 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/addons-106685.rawdisk...
	I0318 12:45:12.244935 1075954 main.go:141] libmachine: (addons-106685) DBG | Writing magic tar header
	I0318 12:45:12.244959 1075954 main.go:141] libmachine: (addons-106685) DBG | Writing SSH key tar header
	I0318 12:45:12.244979 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:12.244851 1075976 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685 ...
	I0318 12:45:12.245035 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685
	I0318 12:45:12.245054 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 12:45:12.245068 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685 (perms=drwx------)
	I0318 12:45:12.245084 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 12:45:12.245091 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 12:45:12.245097 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 12:45:12.245106 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 12:45:12.245115 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 12:45:12.245135 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home/jenkins
	I0318 12:45:12.245156 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 12:45:12.245169 1075954 main.go:141] libmachine: (addons-106685) DBG | Checking permissions on dir: /home
	I0318 12:45:12.245184 1075954 main.go:141] libmachine: (addons-106685) DBG | Skipping /home - not owner
	I0318 12:45:12.245196 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 12:45:12.245205 1075954 main.go:141] libmachine: (addons-106685) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 12:45:12.245215 1075954 main.go:141] libmachine: (addons-106685) Creating domain...
	I0318 12:45:12.246437 1075954 main.go:141] libmachine: (addons-106685) define libvirt domain using xml: 
	I0318 12:45:12.246475 1075954 main.go:141] libmachine: (addons-106685) <domain type='kvm'>
	I0318 12:45:12.246487 1075954 main.go:141] libmachine: (addons-106685)   <name>addons-106685</name>
	I0318 12:45:12.246495 1075954 main.go:141] libmachine: (addons-106685)   <memory unit='MiB'>4000</memory>
	I0318 12:45:12.246503 1075954 main.go:141] libmachine: (addons-106685)   <vcpu>2</vcpu>
	I0318 12:45:12.246514 1075954 main.go:141] libmachine: (addons-106685)   <features>
	I0318 12:45:12.246525 1075954 main.go:141] libmachine: (addons-106685)     <acpi/>
	I0318 12:45:12.246534 1075954 main.go:141] libmachine: (addons-106685)     <apic/>
	I0318 12:45:12.246546 1075954 main.go:141] libmachine: (addons-106685)     <pae/>
	I0318 12:45:12.246560 1075954 main.go:141] libmachine: (addons-106685)     
	I0318 12:45:12.246573 1075954 main.go:141] libmachine: (addons-106685)   </features>
	I0318 12:45:12.246583 1075954 main.go:141] libmachine: (addons-106685)   <cpu mode='host-passthrough'>
	I0318 12:45:12.246595 1075954 main.go:141] libmachine: (addons-106685)   
	I0318 12:45:12.246607 1075954 main.go:141] libmachine: (addons-106685)   </cpu>
	I0318 12:45:12.246618 1075954 main.go:141] libmachine: (addons-106685)   <os>
	I0318 12:45:12.246630 1075954 main.go:141] libmachine: (addons-106685)     <type>hvm</type>
	I0318 12:45:12.246648 1075954 main.go:141] libmachine: (addons-106685)     <boot dev='cdrom'/>
	I0318 12:45:12.246672 1075954 main.go:141] libmachine: (addons-106685)     <boot dev='hd'/>
	I0318 12:45:12.246685 1075954 main.go:141] libmachine: (addons-106685)     <bootmenu enable='no'/>
	I0318 12:45:12.246699 1075954 main.go:141] libmachine: (addons-106685)   </os>
	I0318 12:45:12.246711 1075954 main.go:141] libmachine: (addons-106685)   <devices>
	I0318 12:45:12.246720 1075954 main.go:141] libmachine: (addons-106685)     <disk type='file' device='cdrom'>
	I0318 12:45:12.246736 1075954 main.go:141] libmachine: (addons-106685)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/boot2docker.iso'/>
	I0318 12:45:12.246747 1075954 main.go:141] libmachine: (addons-106685)       <target dev='hdc' bus='scsi'/>
	I0318 12:45:12.246755 1075954 main.go:141] libmachine: (addons-106685)       <readonly/>
	I0318 12:45:12.246761 1075954 main.go:141] libmachine: (addons-106685)     </disk>
	I0318 12:45:12.246772 1075954 main.go:141] libmachine: (addons-106685)     <disk type='file' device='disk'>
	I0318 12:45:12.246790 1075954 main.go:141] libmachine: (addons-106685)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 12:45:12.246807 1075954 main.go:141] libmachine: (addons-106685)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/addons-106685.rawdisk'/>
	I0318 12:45:12.246817 1075954 main.go:141] libmachine: (addons-106685)       <target dev='hda' bus='virtio'/>
	I0318 12:45:12.246828 1075954 main.go:141] libmachine: (addons-106685)     </disk>
	I0318 12:45:12.246838 1075954 main.go:141] libmachine: (addons-106685)     <interface type='network'>
	I0318 12:45:12.246847 1075954 main.go:141] libmachine: (addons-106685)       <source network='mk-addons-106685'/>
	I0318 12:45:12.246858 1075954 main.go:141] libmachine: (addons-106685)       <model type='virtio'/>
	I0318 12:45:12.246886 1075954 main.go:141] libmachine: (addons-106685)     </interface>
	I0318 12:45:12.246912 1075954 main.go:141] libmachine: (addons-106685)     <interface type='network'>
	I0318 12:45:12.246937 1075954 main.go:141] libmachine: (addons-106685)       <source network='default'/>
	I0318 12:45:12.246963 1075954 main.go:141] libmachine: (addons-106685)       <model type='virtio'/>
	I0318 12:45:12.246973 1075954 main.go:141] libmachine: (addons-106685)     </interface>
	I0318 12:45:12.246980 1075954 main.go:141] libmachine: (addons-106685)     <serial type='pty'>
	I0318 12:45:12.246989 1075954 main.go:141] libmachine: (addons-106685)       <target port='0'/>
	I0318 12:45:12.246996 1075954 main.go:141] libmachine: (addons-106685)     </serial>
	I0318 12:45:12.247004 1075954 main.go:141] libmachine: (addons-106685)     <console type='pty'>
	I0318 12:45:12.247018 1075954 main.go:141] libmachine: (addons-106685)       <target type='serial' port='0'/>
	I0318 12:45:12.247026 1075954 main.go:141] libmachine: (addons-106685)     </console>
	I0318 12:45:12.247035 1075954 main.go:141] libmachine: (addons-106685)     <rng model='virtio'>
	I0318 12:45:12.247046 1075954 main.go:141] libmachine: (addons-106685)       <backend model='random'>/dev/random</backend>
	I0318 12:45:12.247057 1075954 main.go:141] libmachine: (addons-106685)     </rng>
	I0318 12:45:12.247065 1075954 main.go:141] libmachine: (addons-106685)     
	I0318 12:45:12.247071 1075954 main.go:141] libmachine: (addons-106685)     
	I0318 12:45:12.247079 1075954 main.go:141] libmachine: (addons-106685)   </devices>
	I0318 12:45:12.247089 1075954 main.go:141] libmachine: (addons-106685) </domain>
	I0318 12:45:12.247099 1075954 main.go:141] libmachine: (addons-106685) 
	I0318 12:45:12.251787 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:87:c8:5a in network default
	I0318 12:45:12.252484 1075954 main.go:141] libmachine: (addons-106685) Ensuring networks are active...
	I0318 12:45:12.252507 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:12.253238 1075954 main.go:141] libmachine: (addons-106685) Ensuring network default is active
	I0318 12:45:12.253557 1075954 main.go:141] libmachine: (addons-106685) Ensuring network mk-addons-106685 is active
	I0318 12:45:12.254000 1075954 main.go:141] libmachine: (addons-106685) Getting domain xml...
	I0318 12:45:12.254759 1075954 main.go:141] libmachine: (addons-106685) Creating domain...
	I0318 12:45:13.462813 1075954 main.go:141] libmachine: (addons-106685) Waiting to get IP...
	I0318 12:45:13.463677 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:13.464099 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:13.464121 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:13.464078 1075976 retry.go:31] will retry after 290.892875ms: waiting for machine to come up
	I0318 12:45:13.756719 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:13.757214 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:13.757259 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:13.757156 1075976 retry.go:31] will retry after 352.926024ms: waiting for machine to come up
	I0318 12:45:14.111847 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:14.112276 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:14.112312 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:14.112233 1075976 retry.go:31] will retry after 414.178519ms: waiting for machine to come up
	I0318 12:45:14.527693 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:14.528085 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:14.528117 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:14.528022 1075976 retry.go:31] will retry after 567.10278ms: waiting for machine to come up
	I0318 12:45:15.096787 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:15.097158 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:15.097211 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:15.097100 1075976 retry.go:31] will retry after 566.579197ms: waiting for machine to come up
	I0318 12:45:15.664978 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:15.665384 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:15.665419 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:15.665328 1075976 retry.go:31] will retry after 918.670819ms: waiting for machine to come up
	I0318 12:45:16.586278 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:16.586742 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:16.586772 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:16.586686 1075976 retry.go:31] will retry after 774.966807ms: waiting for machine to come up
	I0318 12:45:17.363763 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:17.364163 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:17.364197 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:17.364114 1075976 retry.go:31] will retry after 1.48184225s: waiting for machine to come up
	I0318 12:45:18.847757 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:18.848261 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:18.848289 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:18.848219 1075976 retry.go:31] will retry after 1.536147853s: waiting for machine to come up
	I0318 12:45:20.385864 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:20.386322 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:20.386352 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:20.386266 1075976 retry.go:31] will retry after 2.056836281s: waiting for machine to come up
	I0318 12:45:22.445269 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:22.445724 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:22.445760 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:22.445676 1075976 retry.go:31] will retry after 2.566944137s: waiting for machine to come up
	I0318 12:45:25.015803 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:25.016350 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:25.016384 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:25.016293 1075976 retry.go:31] will retry after 3.537481726s: waiting for machine to come up
	I0318 12:45:28.556682 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:28.557141 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find current IP address of domain addons-106685 in network mk-addons-106685
	I0318 12:45:28.557170 1075954 main.go:141] libmachine: (addons-106685) DBG | I0318 12:45:28.557099 1075976 retry.go:31] will retry after 4.234625852s: waiting for machine to come up
	I0318 12:45:32.794340 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:32.794847 1075954 main.go:141] libmachine: (addons-106685) Found IP for machine: 192.168.39.205
	I0318 12:45:32.794902 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has current primary IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:32.794913 1075954 main.go:141] libmachine: (addons-106685) Reserving static IP address...
	I0318 12:45:32.795227 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find host DHCP lease matching {name: "addons-106685", mac: "52:54:00:ae:c4:53", ip: "192.168.39.205"} in network mk-addons-106685
	I0318 12:45:32.872091 1075954 main.go:141] libmachine: (addons-106685) DBG | Getting to WaitForSSH function...
	I0318 12:45:32.872131 1075954 main.go:141] libmachine: (addons-106685) Reserved static IP address: 192.168.39.205
	I0318 12:45:32.872181 1075954 main.go:141] libmachine: (addons-106685) Waiting for SSH to be available...
	I0318 12:45:32.874712 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:32.875065 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685
	I0318 12:45:32.875105 1075954 main.go:141] libmachine: (addons-106685) DBG | unable to find defined IP address of network mk-addons-106685 interface with MAC address 52:54:00:ae:c4:53
	I0318 12:45:32.875315 1075954 main.go:141] libmachine: (addons-106685) DBG | Using SSH client type: external
	I0318 12:45:32.875342 1075954 main.go:141] libmachine: (addons-106685) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa (-rw-------)
	I0318 12:45:32.875380 1075954 main.go:141] libmachine: (addons-106685) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:45:32.875405 1075954 main.go:141] libmachine: (addons-106685) DBG | About to run SSH command:
	I0318 12:45:32.875424 1075954 main.go:141] libmachine: (addons-106685) DBG | exit 0
	I0318 12:45:32.879655 1075954 main.go:141] libmachine: (addons-106685) DBG | SSH cmd err, output: exit status 255: 
	I0318 12:45:32.879681 1075954 main.go:141] libmachine: (addons-106685) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0318 12:45:32.879688 1075954 main.go:141] libmachine: (addons-106685) DBG | command : exit 0
	I0318 12:45:32.879696 1075954 main.go:141] libmachine: (addons-106685) DBG | err     : exit status 255
	I0318 12:45:32.879705 1075954 main.go:141] libmachine: (addons-106685) DBG | output  : 
	I0318 12:45:35.881929 1075954 main.go:141] libmachine: (addons-106685) DBG | Getting to WaitForSSH function...
	I0318 12:45:35.884936 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:35.885448 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:35.885489 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:35.885551 1075954 main.go:141] libmachine: (addons-106685) DBG | Using SSH client type: external
	I0318 12:45:35.885572 1075954 main.go:141] libmachine: (addons-106685) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa (-rw-------)
	I0318 12:45:35.885609 1075954 main.go:141] libmachine: (addons-106685) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:45:35.885655 1075954 main.go:141] libmachine: (addons-106685) DBG | About to run SSH command:
	I0318 12:45:35.885673 1075954 main.go:141] libmachine: (addons-106685) DBG | exit 0
	I0318 12:45:36.012423 1075954 main.go:141] libmachine: (addons-106685) DBG | SSH cmd err, output: <nil>: 
	I0318 12:45:36.012870 1075954 main.go:141] libmachine: (addons-106685) KVM machine creation complete!
	I0318 12:45:36.013332 1075954 main.go:141] libmachine: (addons-106685) Calling .GetConfigRaw
	I0318 12:45:36.068639 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:36.131266 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:36.131489 1075954 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 12:45:36.131506 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:45:36.133232 1075954 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 12:45:36.133256 1075954 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 12:45:36.133263 1075954 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 12:45:36.133283 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.136162 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.136497 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.136536 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.136664 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:36.136871 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.137037 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.137180 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:36.137354 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:36.137642 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:36.137661 1075954 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 12:45:36.243627 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:45:36.243662 1075954 main.go:141] libmachine: Detecting the provisioner...
	I0318 12:45:36.243671 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.246659 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.247171 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.247207 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.247361 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:36.247613 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.247822 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.248038 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:36.248206 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:36.248388 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:36.248398 1075954 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 12:45:36.356991 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 12:45:36.357080 1075954 main.go:141] libmachine: found compatible host: buildroot
	I0318 12:45:36.357090 1075954 main.go:141] libmachine: Provisioning with buildroot...
	I0318 12:45:36.357098 1075954 main.go:141] libmachine: (addons-106685) Calling .GetMachineName
	I0318 12:45:36.357441 1075954 buildroot.go:166] provisioning hostname "addons-106685"
	I0318 12:45:36.357479 1075954 main.go:141] libmachine: (addons-106685) Calling .GetMachineName
	I0318 12:45:36.357700 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.360332 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.360708 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.360740 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.360860 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:36.360998 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.361178 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.361289 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:36.361425 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:36.361673 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:36.361692 1075954 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-106685 && echo "addons-106685" | sudo tee /etc/hostname
	I0318 12:45:36.483757 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-106685
	
	I0318 12:45:36.483786 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.486764 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.487132 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.487164 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.487298 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:36.487544 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.487760 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.487974 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:36.488246 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:36.488510 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:36.488533 1075954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-106685' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-106685/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-106685' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:45:36.605298 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
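The shell fragment above makes the new hostname resolve locally by rewriting (or appending) the 127.0.1.1 entry in /etc/hosts. A minimal check of the result on the guest, assuming the addons-106685 hostname used in this run:
	hostname                       # expected: addons-106685
	grep '^127.0.1.1' /etc/hosts   # expected: 127.0.1.1 addons-106685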
	I0318 12:45:36.605337 1075954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 12:45:36.605379 1075954 buildroot.go:174] setting up certificates
	I0318 12:45:36.605390 1075954 provision.go:84] configureAuth start
	I0318 12:45:36.605401 1075954 main.go:141] libmachine: (addons-106685) Calling .GetMachineName
	I0318 12:45:36.605791 1075954 main.go:141] libmachine: (addons-106685) Calling .GetIP
	I0318 12:45:36.608648 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.609071 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.609103 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.609254 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.611764 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.612363 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.612399 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.612611 1075954 provision.go:143] copyHostCerts
	I0318 12:45:36.612715 1075954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 12:45:36.612879 1075954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 12:45:36.612998 1075954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 12:45:36.613072 1075954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.addons-106685 san=[127.0.0.1 192.168.39.205 addons-106685 localhost minikube]
	I0318 12:45:36.867664 1075954 provision.go:177] copyRemoteCerts
	I0318 12:45:36.867758 1075954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:45:36.867794 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:36.870932 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.871239 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:36.871266 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:36.871450 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:36.871710 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:36.871888 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:36.872064 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:45:36.954687 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:45:36.981528 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 12:45:37.008069 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 12:45:37.034582 1075954 provision.go:87] duration metric: took 429.176891ms to configureAuth
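configureAuth above generated a server certificate with the listed SANs and copied ca.pem, server.pem and server-key.pem onto the guest. A hedged spot check, using the /etc/docker paths from the scp lines above:
	sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem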
	I0318 12:45:37.034614 1075954 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:45:37.034784 1075954 config.go:182] Loaded profile config "addons-106685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:45:37.034893 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:37.037849 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.038212 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.038266 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.038425 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:37.038654 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.038819 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.038926 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:37.039096 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:37.039299 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:37.039328 1075954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:45:37.322514 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
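The command above writes the insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O; the echoed output confirms the file contents. A quick verification on the guest (same paths as above):
	cat /etc/sysconfig/crio.minikube   # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio           # expected: active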
	
	I0318 12:45:37.322547 1075954 main.go:141] libmachine: Checking connection to Docker...
	I0318 12:45:37.322559 1075954 main.go:141] libmachine: (addons-106685) Calling .GetURL
	I0318 12:45:37.324094 1075954 main.go:141] libmachine: (addons-106685) DBG | Using libvirt version 6000000
	I0318 12:45:37.326652 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.327104 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.327131 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.327341 1075954 main.go:141] libmachine: Docker is up and running!
	I0318 12:45:37.327357 1075954 main.go:141] libmachine: Reticulating splines...
	I0318 12:45:37.327367 1075954 client.go:171] duration metric: took 26.217545276s to LocalClient.Create
	I0318 12:45:37.327405 1075954 start.go:167] duration metric: took 26.217620004s to libmachine.API.Create "addons-106685"
	I0318 12:45:37.327417 1075954 start.go:293] postStartSetup for "addons-106685" (driver="kvm2")
	I0318 12:45:37.327427 1075954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:45:37.327445 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:37.327718 1075954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:45:37.327742 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:37.330171 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.330544 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.330585 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.330734 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:37.330945 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.331111 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:37.331256 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:45:37.415063 1075954 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:45:37.419941 1075954 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:45:37.419973 1075954 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 12:45:37.420073 1075954 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 12:45:37.420104 1075954 start.go:296] duration metric: took 92.681482ms for postStartSetup
	I0318 12:45:37.420148 1075954 main.go:141] libmachine: (addons-106685) Calling .GetConfigRaw
	I0318 12:45:37.420781 1075954 main.go:141] libmachine: (addons-106685) Calling .GetIP
	I0318 12:45:37.423622 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.424116 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.424150 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.424426 1075954 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/config.json ...
	I0318 12:45:37.424654 1075954 start.go:128] duration metric: took 26.334770448s to createHost
	I0318 12:45:37.424683 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:37.426995 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.427339 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.427378 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.427468 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:37.427671 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.427882 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.428024 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:37.428188 1075954 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:37.428412 1075954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0318 12:45:37.428428 1075954 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 12:45:37.537153 1075954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765937.522700288
	
	I0318 12:45:37.537184 1075954 fix.go:216] guest clock: 1710765937.522700288
	I0318 12:45:37.537210 1075954 fix.go:229] Guest: 2024-03-18 12:45:37.522700288 +0000 UTC Remote: 2024-03-18 12:45:37.424668799 +0000 UTC m=+26.459204216 (delta=98.031489ms)
	I0318 12:45:37.537283 1075954 fix.go:200] guest clock delta is within tolerance: 98.031489ms
	I0318 12:45:37.537292 1075954 start.go:83] releasing machines lock for "addons-106685", held for 26.447533925s
	I0318 12:45:37.537322 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:37.537673 1075954 main.go:141] libmachine: (addons-106685) Calling .GetIP
	I0318 12:45:37.540401 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.540740 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.540774 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.540943 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:37.541446 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:37.541662 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:45:37.541773 1075954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:45:37.541844 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:37.541907 1075954 ssh_runner.go:195] Run: cat /version.json
	I0318 12:45:37.541931 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:45:37.544456 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.544745 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.544771 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.544792 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.544954 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:37.545162 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.545326 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:37.545347 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:37.545364 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:37.545497 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:45:37.545584 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:45:37.545673 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:45:37.545817 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:45:37.545970 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:45:37.646848 1075954 ssh_runner.go:195] Run: systemctl --version
	I0318 12:45:37.653170 1075954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:45:37.821719 1075954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:45:37.829581 1075954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:45:37.829665 1075954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:45:37.847432 1075954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:45:37.847476 1075954 start.go:494] detecting cgroup driver to use...
	I0318 12:45:37.847562 1075954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:45:37.870207 1075954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:45:37.885705 1075954 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:45:37.885765 1075954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:45:37.901549 1075954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:45:37.916883 1075954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:45:38.039774 1075954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:45:38.215989 1075954 docker.go:233] disabling docker service ...
	I0318 12:45:38.216093 1075954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:45:38.232133 1075954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:45:38.245407 1075954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:45:38.379181 1075954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:45:38.509113 1075954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:45:38.524157 1075954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:45:38.543882 1075954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:45:38.543961 1075954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:38.554833 1075954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:45:38.554922 1075954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:38.565763 1075954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:38.576544 1075954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:38.587614 1075954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
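Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these keys (a sketch of the intended end state, not a verbatim dump of the file):
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"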
	I0318 12:45:38.598670 1075954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:45:38.608147 1075954 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 12:45:38.608226 1075954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 12:45:38.622528 1075954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
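The sysctl probe fails because br_netfilter is not loaded yet, so the code falls back to loading the module and enabling IPv4 forwarding. The same sequence, condensed from the commands above:
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables        # should now succeed (typically prints 1)
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"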
	I0318 12:45:38.632548 1075954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:45:38.755882 1075954 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 12:45:38.907853 1075954 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:45:38.907972 1075954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:45:38.913656 1075954 start.go:562] Will wait 60s for crictl version
	I0318 12:45:38.913747 1075954 ssh_runner.go:195] Run: which crictl
	I0318 12:45:38.917820 1075954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:45:38.960367 1075954 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:45:38.960486 1075954 ssh_runner.go:195] Run: crio --version
	I0318 12:45:38.990349 1075954 ssh_runner.go:195] Run: crio --version
	I0318 12:45:39.025630 1075954 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:45:39.027096 1075954 main.go:141] libmachine: (addons-106685) Calling .GetIP
	I0318 12:45:39.029985 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:39.030296 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:45:39.030350 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:45:39.030527 1075954 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:45:39.034931 1075954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:45:39.047956 1075954 kubeadm.go:877] updating cluster {Name:addons-106685 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:addons-106685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:45:39.048090 1075954 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:45:39.048146 1075954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:45:39.082992 1075954 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 12:45:39.083081 1075954 ssh_runner.go:195] Run: which lz4
	I0318 12:45:39.087448 1075954 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 12:45:39.091848 1075954 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 12:45:39.091890 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 12:45:40.783677 1075954 crio.go:444] duration metric: took 1.696280623s to copy over tarball
	I0318 12:45:40.783786 1075954 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 12:45:43.492467 1075954 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.708639438s)
	I0318 12:45:43.492509 1075954 crio.go:451] duration metric: took 2.708788824s to extract the tarball
	I0318 12:45:43.492521 1075954 ssh_runner.go:146] rm: /preloaded.tar.lz4
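After the preload tarball is extracted into /var, the CRI-O image store should already contain the control-plane images, which is what the next crictl call verifies. A hedged spot check:
	sudo crictl images | grep kube-apiserver   # expected tag: v1.28.4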
	I0318 12:45:43.535449 1075954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:45:43.576356 1075954 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 12:45:43.576382 1075954 cache_images.go:84] Images are preloaded, skipping loading
	I0318 12:45:43.576394 1075954 kubeadm.go:928] updating node { 192.168.39.205 8443 v1.28.4 crio true true} ...
	I0318 12:45:43.576506 1075954 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-106685 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-106685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
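This drop-in overrides the kubelet ExecStart with the per-profile node IP and hostname; later in the log it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect the effective unit on the guest (a sketch, not part of this run):
	systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in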
	I0318 12:45:43.576600 1075954 ssh_runner.go:195] Run: crio config
	I0318 12:45:43.628071 1075954 cni.go:84] Creating CNI manager for ""
	I0318 12:45:43.628098 1075954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:45:43.628112 1075954 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:45:43.628139 1075954 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-106685 NodeName:addons-106685 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:45:43.628309 1075954 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-106685"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
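The generated config above is staged a few lines below as /var/tmp/minikube/kubeadm.yaml.new before being copied into place. To inspect it on the node, or to compare against kubeadm's built-in defaults (a sketch, not part of this run):
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	kubeadm config print init-defaults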
	
	I0318 12:45:43.628376 1075954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:45:43.639013 1075954 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:45:43.639104 1075954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 12:45:43.649537 1075954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 12:45:43.667444 1075954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:45:43.685194 1075954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0318 12:45:43.703472 1075954 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I0318 12:45:43.707976 1075954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:45:43.721502 1075954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:45:43.842694 1075954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:45:43.860185 1075954 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685 for IP: 192.168.39.205
	I0318 12:45:43.860216 1075954 certs.go:194] generating shared ca certs ...
	I0318 12:45:43.860236 1075954 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:43.860402 1075954 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 12:45:43.965932 1075954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt ...
	I0318 12:45:43.965968 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt: {Name:mk5f9551de9c497d1c59382d38e79a61c6cfd7c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:43.966185 1075954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key ...
	I0318 12:45:43.966201 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key: {Name:mk41a0f707f6782a7d808da53e4fcdabcf550858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:43.966342 1075954 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 12:45:44.030624 1075954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt ...
	I0318 12:45:44.030661 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt: {Name:mkd59dd5caba64aef304a4b13ca0d6338782347a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.030827 1075954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key ...
	I0318 12:45:44.030839 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key: {Name:mk911f9d9682c437c92758b0616767e4bda773e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.030910 1075954 certs.go:256] generating profile certs ...
	I0318 12:45:44.030982 1075954 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.key
	I0318 12:45:44.030998 1075954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt with IP's: []
	I0318 12:45:44.350157 1075954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt ...
	I0318 12:45:44.350193 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: {Name:mk0c2e9276cbcab9a530edc0a7cd4eec0d2a232b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.350354 1075954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.key ...
	I0318 12:45:44.350365 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.key: {Name:mk2254574a4a4d0953d51ff29d16fe78ebb8c6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.350435 1075954 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key.927156ff
	I0318 12:45:44.350460 1075954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt.927156ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205]
	I0318 12:45:44.424645 1075954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt.927156ff ...
	I0318 12:45:44.424679 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt.927156ff: {Name:mkff85023b3ecfdabda3962ce6116dea82c5da82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.424858 1075954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key.927156ff ...
	I0318 12:45:44.424872 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key.927156ff: {Name:mkebd83009dd1139661d27893984c331ba1dfe2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.424948 1075954 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt.927156ff -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt
	I0318 12:45:44.425022 1075954 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key.927156ff -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key
	I0318 12:45:44.425066 1075954 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.key
	I0318 12:45:44.425085 1075954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.crt with IP's: []
	I0318 12:45:44.672811 1075954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.crt ...
	I0318 12:45:44.672846 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.crt: {Name:mka7be3b25a4b14abe83604a5406042112834714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.673042 1075954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.key ...
	I0318 12:45:44.673065 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.key: {Name:mke2fdfc89764503c7adabc96fde8b082b491125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:44.673248 1075954 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 12:45:44.673288 1075954 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:45:44.673314 1075954 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:45:44.673340 1075954 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 12:45:44.673966 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:45:44.701672 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:45:44.727786 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:45:44.755257 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:45:44.783127 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0318 12:45:44.815084 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 12:45:44.845776 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:45:44.877105 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 12:45:44.909143 1075954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:45:44.941714 1075954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
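With the profile certificates copied into /var/lib/minikube/certs, the API server certificate should carry the SANs requested earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.205). A hedged way to confirm on the guest:
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'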
	I0318 12:45:44.961585 1075954 ssh_runner.go:195] Run: openssl version
	I0318 12:45:44.967933 1075954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:45:44.980360 1075954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:44.985579 1075954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:44.985639 1075954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:44.991938 1075954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:45:45.004221 1075954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:45:45.008958 1075954 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:45:45.009016 1075954 kubeadm.go:391] StartCluster: {Name:addons-106685 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-106685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:45:45.009127 1075954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 12:45:45.009193 1075954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 12:45:45.050728 1075954 cri.go:89] found id: ""
	I0318 12:45:45.050807 1075954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 12:45:45.062192 1075954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 12:45:45.073044 1075954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 12:45:45.083862 1075954 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:45:45.083885 1075954 kubeadm.go:156] found existing configuration files:
	
	I0318 12:45:45.083941 1075954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 12:45:45.094595 1075954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:45:45.094667 1075954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 12:45:45.105062 1075954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 12:45:45.116425 1075954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:45:45.116510 1075954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 12:45:45.127846 1075954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 12:45:45.138231 1075954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:45:45.138304 1075954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 12:45:45.149189 1075954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 12:45:45.159556 1075954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:45:45.159622 1075954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 12:45:45.170708 1075954 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 12:45:45.380937 1075954 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 12:45:55.021435 1075954 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 12:45:55.021522 1075954 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 12:45:55.021602 1075954 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 12:45:55.021717 1075954 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 12:45:55.021825 1075954 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 12:45:55.021931 1075954 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:45:55.023787 1075954 out.go:204]   - Generating certificates and keys ...
	I0318 12:45:55.023899 1075954 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 12:45:55.023985 1075954 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 12:45:55.024081 1075954 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 12:45:55.024203 1075954 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 12:45:55.024282 1075954 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 12:45:55.024352 1075954 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 12:45:55.024438 1075954 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 12:45:55.024606 1075954 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-106685 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0318 12:45:55.024700 1075954 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 12:45:55.024886 1075954 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-106685 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0318 12:45:55.024981 1075954 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 12:45:55.025080 1075954 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 12:45:55.025152 1075954 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 12:45:55.025214 1075954 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:45:55.025260 1075954 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:45:55.025308 1075954 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:45:55.025361 1075954 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:45:55.025426 1075954 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:45:55.025515 1075954 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 12:45:55.025596 1075954 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:45:55.027204 1075954 out.go:204]   - Booting up control plane ...
	I0318 12:45:55.027329 1075954 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:45:55.027431 1075954 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:45:55.027532 1075954 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:45:55.027662 1075954 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:45:55.027793 1075954 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:45:55.027886 1075954 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 12:45:55.028043 1075954 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 12:45:55.028148 1075954 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002163 seconds
	I0318 12:45:55.028271 1075954 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 12:45:55.028419 1075954 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 12:45:55.028483 1075954 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 12:45:55.028676 1075954 kubeadm.go:309] [mark-control-plane] Marking the node addons-106685 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 12:45:55.028758 1075954 kubeadm.go:309] [bootstrap-token] Using token: cv7fgx.9rsgzbp5eibqd9vf
	I0318 12:45:55.030163 1075954 out.go:204]   - Configuring RBAC rules ...
	I0318 12:45:55.030292 1075954 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 12:45:55.030390 1075954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 12:45:55.030576 1075954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 12:45:55.030717 1075954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 12:45:55.030841 1075954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 12:45:55.030944 1075954 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 12:45:55.031045 1075954 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 12:45:55.031085 1075954 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 12:45:55.031128 1075954 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 12:45:55.031134 1075954 kubeadm.go:309] 
	I0318 12:45:55.031186 1075954 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 12:45:55.031199 1075954 kubeadm.go:309] 
	I0318 12:45:55.031268 1075954 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 12:45:55.031275 1075954 kubeadm.go:309] 
	I0318 12:45:55.031316 1075954 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 12:45:55.031403 1075954 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 12:45:55.031475 1075954 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 12:45:55.031484 1075954 kubeadm.go:309] 
	I0318 12:45:55.031576 1075954 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 12:45:55.031596 1075954 kubeadm.go:309] 
	I0318 12:45:55.031675 1075954 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 12:45:55.031686 1075954 kubeadm.go:309] 
	I0318 12:45:55.031748 1075954 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 12:45:55.031838 1075954 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 12:45:55.031954 1075954 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 12:45:55.031968 1075954 kubeadm.go:309] 
	I0318 12:45:55.032066 1075954 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 12:45:55.032165 1075954 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 12:45:55.032175 1075954 kubeadm.go:309] 
	I0318 12:45:55.032288 1075954 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token cv7fgx.9rsgzbp5eibqd9vf \
	I0318 12:45:55.032410 1075954 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 12:45:55.032439 1075954 kubeadm.go:309] 	--control-plane 
	I0318 12:45:55.032448 1075954 kubeadm.go:309] 
	I0318 12:45:55.032555 1075954 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 12:45:55.032563 1075954 kubeadm.go:309] 
	I0318 12:45:55.032629 1075954 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token cv7fgx.9rsgzbp5eibqd9vf \
	I0318 12:45:55.032729 1075954 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 12:45:55.032741 1075954 cni.go:84] Creating CNI manager for ""
	I0318 12:45:55.032748 1075954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:45:55.035083 1075954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 12:45:55.036245 1075954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 12:45:55.103761 1075954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 12:45:55.163387 1075954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 12:45:55.163458 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:55.163518 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-106685 minikube.k8s.io/updated_at=2024_03_18T12_45_55_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=addons-106685 minikube.k8s.io/primary=true
	I0318 12:45:55.206033 1075954 ops.go:34] apiserver oom_adj: -16
	I0318 12:45:55.298781 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:55.799003 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:56.299315 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:56.799220 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:57.299355 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:57.799725 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:58.298886 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:58.799600 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:59.299091 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:45:59.799648 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:00.299723 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:00.799476 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:01.299164 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:01.799077 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:02.299358 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:02.798941 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:03.299852 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:03.799861 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:04.299807 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:04.799395 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:05.299430 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:05.799443 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:06.299627 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:06.798985 1075954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:46:06.897074 1075954 kubeadm.go:1107] duration metric: took 11.73367192s to wait for elevateKubeSystemPrivileges
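	(The run of identical "kubectl get sa default" invocations above is a poll loop: minikube keeps checking until the default ServiceAccount exists before it finishes elevating kube-system privileges. A minimal Go sketch of that wait pattern, purely illustrative and not minikube's actual implementation, is:)

	// waitloop.go: hypothetical sketch of the "get sa default" poll seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultServiceAccount shells out to kubectl until the default
	// ServiceAccount can be read, or the timeout elapses.
	func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // ServiceAccount exists; safe to proceed with RBAC setup
			}
			time.Sleep(500 * time.Millisecond) // roughly the ~0.5s cadence visible in the log
		}
		return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
	}

	func main() {
		// Path matches the kubeconfig used by the commands in the log; timeout is an assumption.
		if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}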
	W0318 12:46:06.897146 1075954 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 12:46:06.897159 1075954 kubeadm.go:393] duration metric: took 21.888147741s to StartCluster
	I0318 12:46:06.897185 1075954 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:46:06.897333 1075954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 12:46:06.897835 1075954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:46:06.898119 1075954 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:46:06.900039 1075954 out.go:177] * Verifying Kubernetes components...
	I0318 12:46:06.898168 1075954 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0318 12:46:06.898133 1075954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 12:46:06.898360 1075954 config.go:182] Loaded profile config "addons-106685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:46:06.901440 1075954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:46:06.901449 1075954 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-106685"
	I0318 12:46:06.901464 1075954 addons.go:69] Setting metrics-server=true in profile "addons-106685"
	I0318 12:46:06.901469 1075954 addons.go:69] Setting gcp-auth=true in profile "addons-106685"
	I0318 12:46:06.901487 1075954 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-106685"
	I0318 12:46:06.901510 1075954 mustload.go:65] Loading cluster: addons-106685
	I0318 12:46:06.901502 1075954 addons.go:69] Setting storage-provisioner=true in profile "addons-106685"
	I0318 12:46:06.901519 1075954 addons.go:234] Setting addon metrics-server=true in "addons-106685"
	I0318 12:46:06.901526 1075954 addons.go:69] Setting helm-tiller=true in profile "addons-106685"
	I0318 12:46:06.901533 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901442 1075954 addons.go:69] Setting yakd=true in profile "addons-106685"
	I0318 12:46:06.901541 1075954 addons.go:234] Setting addon storage-provisioner=true in "addons-106685"
	I0318 12:46:06.901547 1075954 addons.go:234] Setting addon helm-tiller=true in "addons-106685"
	I0318 12:46:06.901555 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901558 1075954 addons.go:234] Setting addon yakd=true in "addons-106685"
	I0318 12:46:06.901575 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901580 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901581 1075954 addons.go:69] Setting registry=true in profile "addons-106685"
	I0318 12:46:06.901600 1075954 addons.go:234] Setting addon registry=true in "addons-106685"
	I0318 12:46:06.901620 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901735 1075954 config.go:182] Loaded profile config "addons-106685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:46:06.901786 1075954 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-106685"
	I0318 12:46:06.901836 1075954 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-106685"
	I0318 12:46:06.901971 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902001 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902054 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902063 1075954 addons.go:69] Setting volumesnapshots=true in profile "addons-106685"
	I0318 12:46:06.902067 1075954 addons.go:69] Setting cloud-spanner=true in profile "addons-106685"
	I0318 12:46:06.902086 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902099 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902099 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902106 1075954 addons.go:234] Setting addon volumesnapshots=true in "addons-106685"
	I0318 12:46:06.902120 1075954 addons.go:234] Setting addon cloud-spanner=true in "addons-106685"
	I0318 12:46:06.902124 1075954 addons.go:69] Setting ingress-dns=true in profile "addons-106685"
	I0318 12:46:06.902139 1075954 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-106685"
	I0318 12:46:06.902161 1075954 addons.go:69] Setting default-storageclass=true in profile "addons-106685"
	I0318 12:46:06.902192 1075954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-106685"
	I0318 12:46:06.902196 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902203 1075954 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-106685"
	I0318 12:46:06.902231 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.901575 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.901511 1075954 addons.go:69] Setting ingress=true in profile "addons-106685"
	I0318 12:46:06.902289 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902312 1075954 addons.go:234] Setting addon ingress=true in "addons-106685"
	I0318 12:46:06.902319 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902120 1075954 addons.go:69] Setting inspektor-gadget=true in profile "addons-106685"
	I0318 12:46:06.902391 1075954 addons.go:234] Setting addon inspektor-gadget=true in "addons-106685"
	I0318 12:46:06.902124 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902422 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902467 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902150 1075954 addons.go:234] Setting addon ingress-dns=true in "addons-106685"
	I0318 12:46:06.902540 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902571 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.902579 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902144 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902256 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.902231 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.902794 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902816 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.902853 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.902803 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.902909 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.902944 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.903018 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.903046 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.903152 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.903178 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.903200 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.903274 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.903294 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.903333 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.903372 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.923132 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0318 12:46:06.923152 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0318 12:46:06.923135 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I0318 12:46:06.923140 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I0318 12:46:06.923895 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.923944 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.924009 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.924462 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.924472 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.924488 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.924637 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.924660 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.924774 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.924797 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.924983 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.925004 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.925069 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.925207 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.925271 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.925319 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.925831 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.925878 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.926467 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.926495 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.926677 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.926718 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.927241 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.927267 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.934199 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36821
	I0318 12:46:06.934729 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.935594 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.935615 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.936223 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.936553 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.941070 1075954 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-106685"
	I0318 12:46:06.941128 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.941542 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.941584 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.944147 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.944207 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.946315 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44461
	I0318 12:46:06.948809 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0318 12:46:06.949245 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.949490 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.949964 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.949984 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.950221 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.950251 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.950633 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.950872 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.950988 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.951478 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.951514 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.954027 1075954 addons.go:234] Setting addon default-storageclass=true in "addons-106685"
	I0318 12:46:06.954079 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.954452 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.954507 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.957150 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0318 12:46:06.957828 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.958562 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.958596 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.959114 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.966371 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I0318 12:46:06.966970 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.967092 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.967633 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.967667 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.968158 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.968767 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.968813 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.969053 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39337
	I0318 12:46:06.969523 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.969766 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:06.970064 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.970085 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.972253 1075954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:46:06.970711 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.974126 1075954 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:46:06.974142 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 12:46:06.974168 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:06.974460 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.975274 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36683
	I0318 12:46:06.975387 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I0318 12:46:06.975728 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.976369 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.976439 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.976451 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.976908 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.976956 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:06.977412 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.977425 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.977464 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.978125 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34841
	I0318 12:46:06.978528 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.979056 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.979080 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.979228 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.979240 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.979600 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.979807 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.980006 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.980051 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:06.980061 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.980714 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:06.980737 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0318 12:46:06.980714 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:06.980777 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.981242 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.981284 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.981502 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:06.981711 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:06.982061 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0318 12:46:06.982241 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:06.982555 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.982579 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:06.982556 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.984889 1075954 out.go:177]   - Using image docker.io/registry:2.8.3
	I0318 12:46:06.984203 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.984206 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.984864 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37601
	I0318 12:46:06.986436 1075954 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0318 12:46:06.986491 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.987876 1075954 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0318 12:46:06.986507 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.987892 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0318 12:46:06.987916 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:06.986471 1075954 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0318 12:46:06.987333 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.989439 1075954 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0318 12:46:06.988270 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.989454 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0318 12:46:06.989477 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:06.988398 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.990237 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44105
	I0318 12:46:06.990337 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.990357 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.990372 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.990421 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:06.990829 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.991367 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.991676 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:06.991693 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:06.992213 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.992235 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.992623 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:06.994243 1075954 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0318 12:46:06.993038 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:06.994723 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38877
	I0318 12:46:06.995029 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:06.995707 1075954 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0318 12:46:06.995929 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0318 12:46:06.995961 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:06.997451 1075954 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0318 12:46:06.996540 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:06.996574 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.997715 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.997718 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:06.998752 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:06.998813 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:06.998881 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:06.998903 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.999004 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:06.999006 1075954 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 12:46:06.999068 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0318 12:46:06.999074 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:06.999083 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:06.999094 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:06.998336 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:06.999116 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:06.999231 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:06.999581 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:06.999766 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:06.999938 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.000331 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.000359 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.001289 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.002185 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.002280 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I0318 12:46:07.002502 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.002602 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.002641 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.002712 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.002934 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.003114 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.003500 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.003512 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.003617 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.003724 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.003929 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.004090 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.004206 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:07.004254 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:07.004242 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.005314 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.005952 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.005971 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.006438 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.007109 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:07.007149 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:07.009674 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0318 12:46:07.010235 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.010865 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.010882 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.011317 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.011998 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:07.012047 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:07.015106 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41125
	I0318 12:46:07.015679 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.015940 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I0318 12:46:07.016325 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.016345 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.016756 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.017005 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.019682 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42689
	I0318 12:46:07.020274 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.020328 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.020937 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.020963 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.021458 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.021517 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33053
	I0318 12:46:07.021847 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.021870 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.022252 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:07.022298 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:07.022395 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.022633 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.023209 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.023915 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.023934 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.024406 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.025230 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:07.025285 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:07.025677 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.027971 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0318 12:46:07.026277 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I0318 12:46:07.031203 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0318 12:46:07.030244 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.030499 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I0318 12:46:07.032695 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0318 12:46:07.034172 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0318 12:46:07.033674 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.033966 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.035729 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0318 12:46:07.035802 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.036575 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.037270 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.037331 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0318 12:46:07.037791 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.038305 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.039391 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0318 12:46:07.039598 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.040035 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0318 12:46:07.040377 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.040668 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0318 12:46:07.040907 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0318 12:46:07.042199 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0318 12:46:07.041363 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.041416 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.042220 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0318 12:46:07.042451 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.042786 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0318 12:46:07.043344 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.043663 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.043686 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.043753 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.044084 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.044217 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.044227 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.046058 1075954 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0318 12:46:07.044675 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.044836 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.045314 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.045766 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.046433 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.047419 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.047515 1075954 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 12:46:07.047623 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.048630 1075954 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0318 12:46:07.047628 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.048691 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.048711 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0318 12:46:07.048738 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0318 12:46:07.048911 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.049156 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.050046 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.050072 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.050079 1075954 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0318 12:46:07.050091 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0318 12:46:07.050181 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.050948 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.050960 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.050987 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.051033 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0318 12:46:07.051611 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I0318 12:46:07.051667 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.051681 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.053232 1075954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0318 12:46:07.051995 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.052366 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:07.053046 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.052330 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.053861 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.055094 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.055103 1075954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 12:46:07.056626 1075954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 12:46:07.055116 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.055132 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.055313 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.055606 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.055735 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.056024 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:07.057612 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.058085 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.060571 1075954 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0318 12:46:07.058359 1075954 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 12:46:07.060604 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0318 12:46:07.060625 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.058539 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.058659 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.059349 1075954 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0318 12:46:07.059373 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:07.059381 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.059398 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.059498 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.059523 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.060899 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.061027 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.062178 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0318 12:46:07.062192 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.062386 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.062388 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.062746 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:07.063523 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0318 12:46:07.063538 1075954 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 12:46:07.063546 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.063554 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 12:46:07.063571 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.063575 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.063708 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:07.063903 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.064426 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.064445 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.064652 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.064846 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.065031 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.065175 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.066196 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.066678 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.068662 1075954 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0318 12:46:07.070528 1075954 out.go:177]   - Using image docker.io/busybox:stable
	I0318 12:46:07.067772 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.069055 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:07.069103 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.069377 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.070018 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.070056 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.070617 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.072102 1075954 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 12:46:07.072120 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0318 12:46:07.072127 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.072134 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.070633 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.072152 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.070742 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.073889 1075954 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0318 12:46:07.070867 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.070899 1075954 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 12:46:07.072341 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.075334 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 12:46:07.075364 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.075425 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0318 12:46:07.075444 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0318 12:46:07.075464 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:07.075557 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.075869 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.076174 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.076201 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.076285 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.076474 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.076678 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.077438 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.077659 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.077853 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.079418 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.079840 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.079871 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.079926 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.080108 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.080317 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.080393 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:07.080444 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:07.080501 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.080541 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:07.080613 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:07.080743 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:07.080848 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:07.080987 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	W0318 12:46:07.083206 1075954 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57896->192.168.39.205:22: read: connection reset by peer
	I0318 12:46:07.083239 1075954 retry.go:31] will retry after 229.118256ms: ssh: handshake failed: read tcp 192.168.39.1:57896->192.168.39.205:22: read: connection reset by peer
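	The handshake failure and retry above follow a simple dial-with-backoff pattern: the sshutil dial fails and retry.go schedules another attempt a few hundred milliseconds later. Below is a minimal, self-contained sketch of that pattern in Go; it is illustrative only, not minikube's sshutil/retry implementation, and the address, attempt count, and backoff schedule are assumptions.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry sketches the retry-after-backoff behaviour visible in the
	// log above (dial failure -> "will retry after ..."). It is NOT minikube's
	// code; the timeout and exponential backoff are assumptions for the example.
	func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			fmt.Printf("will retry after %v: %v\n", backoff, err)
			time.Sleep(backoff)
			backoff *= 2 // simple exponential backoff
		}
		return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		// Address taken from the log; purely illustrative here.
		if conn, err := dialWithRetry("192.168.39.205:22", 3, 250*time.Millisecond); err == nil {
			defer conn.Close()
			fmt.Println("connected")
		} else {
			fmt.Println(err)
		}
	}
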
	I0318 12:46:07.243888 1075954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:46:07.435554 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 12:46:07.549885 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0318 12:46:07.572042 1075954 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0318 12:46:07.572074 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0318 12:46:07.574032 1075954 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0318 12:46:07.574053 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0318 12:46:07.575093 1075954 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0318 12:46:07.575107 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0318 12:46:07.658566 1075954 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0318 12:46:07.658597 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0318 12:46:07.659135 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 12:46:07.703163 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:46:07.727752 1075954 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 12:46:07.727784 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0318 12:46:07.729400 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 12:46:07.742915 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0318 12:46:07.742957 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0318 12:46:07.772679 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 12:46:07.773209 1075954 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0318 12:46:07.773240 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0318 12:46:07.829030 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 12:46:07.858865 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0318 12:46:07.858905 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0318 12:46:07.868511 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0318 12:46:07.868548 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0318 12:46:07.943078 1075954 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 12:46:07.943119 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0318 12:46:07.956841 1075954 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0318 12:46:07.956883 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0318 12:46:07.969623 1075954 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0318 12:46:07.969655 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0318 12:46:07.985759 1075954 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.084291114s)
	I0318 12:46:07.985962 1075954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 12:46:08.004731 1075954 node_ready.go:35] waiting up to 6m0s for node "addons-106685" to be "Ready" ...
	I0318 12:46:08.008696 1075954 node_ready.go:49] node "addons-106685" has status "Ready":"True"
	I0318 12:46:08.008735 1075954 node_ready.go:38] duration metric: took 3.970703ms for node "addons-106685" to be "Ready" ...
	I0318 12:46:08.008749 1075954 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:46:08.017461 1075954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgjhz" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:08.084509 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0318 12:46:08.085821 1075954 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0318 12:46:08.085857 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0318 12:46:08.092738 1075954 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0318 12:46:08.092770 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0318 12:46:08.150853 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 12:46:08.157608 1075954 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 12:46:08.157654 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 12:46:08.197687 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0318 12:46:08.197730 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0318 12:46:08.208581 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0318 12:46:08.208630 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0318 12:46:08.308143 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0318 12:46:08.308179 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0318 12:46:08.344572 1075954 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0318 12:46:08.344598 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0318 12:46:08.446574 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0318 12:46:08.446607 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0318 12:46:08.468585 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0318 12:46:08.468620 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0318 12:46:08.492557 1075954 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 12:46:08.492586 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 12:46:08.716550 1075954 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 12:46:08.716574 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0318 12:46:08.761077 1075954 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0318 12:46:08.761110 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0318 12:46:08.767444 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0318 12:46:08.939598 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 12:46:08.982206 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 12:46:09.027486 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0318 12:46:09.027516 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0318 12:46:09.252532 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0318 12:46:09.252561 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0318 12:46:09.347238 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0318 12:46:09.347274 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0318 12:46:09.679245 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0318 12:46:09.679285 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0318 12:46:09.767958 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0318 12:46:09.767998 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0318 12:46:10.034957 1075954 pod_ready.go:102] pod "coredns-5dd5756b68-fgjhz" in "kube-system" namespace has status "Ready":"False"
	I0318 12:46:10.113346 1075954 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 12:46:10.113376 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0318 12:46:10.245194 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0318 12:46:10.245221 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0318 12:46:10.438070 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 12:46:10.626249 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0318 12:46:10.626275 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0318 12:46:10.979251 1075954 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 12:46:10.979282 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0318 12:46:11.382457 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 12:46:11.525767 1075954 pod_ready.go:92] pod "coredns-5dd5756b68-fgjhz" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.525803 1075954 pod_ready.go:81] duration metric: took 3.508298536s for pod "coredns-5dd5756b68-fgjhz" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.525819 1075954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qf446" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.538642 1075954 pod_ready.go:92] pod "coredns-5dd5756b68-qf446" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.538678 1075954 pod_ready.go:81] duration metric: took 12.84949ms for pod "coredns-5dd5756b68-qf446" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.538693 1075954 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.560353 1075954 pod_ready.go:92] pod "etcd-addons-106685" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.560390 1075954 pod_ready.go:81] duration metric: took 21.686991ms for pod "etcd-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.560406 1075954 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.585311 1075954 pod_ready.go:92] pod "kube-apiserver-addons-106685" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.585346 1075954 pod_ready.go:81] duration metric: took 24.929677ms for pod "kube-apiserver-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.585360 1075954 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.629433 1075954 pod_ready.go:92] pod "kube-controller-manager-addons-106685" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.629473 1075954 pod_ready.go:81] duration metric: took 44.101027ms for pod "kube-controller-manager-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.629488 1075954 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ll74j" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.921536 1075954 pod_ready.go:92] pod "kube-proxy-ll74j" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:11.921565 1075954 pod_ready.go:81] duration metric: took 292.067694ms for pod "kube-proxy-ll74j" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:11.921579 1075954 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:12.321727 1075954 pod_ready.go:92] pod "kube-scheduler-addons-106685" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:12.321762 1075954 pod_ready.go:81] duration metric: took 400.174287ms for pod "kube-scheduler-addons-106685" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:12.321774 1075954 pod_ready.go:38] duration metric: took 4.313009788s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
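	Each pod_ready wait above polls the pod until its Ready condition reports True and then records the elapsed duration. The sketch below shows that kind of readiness poll using the standard client-go API; it is a hedged illustration rather than minikube's pod_ready.go, and the kubeconfig path, pod name, poll interval, and timeout are assumptions taken from the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its Ready condition is True or the timeout
	// expires. Illustrative only; not the minikube implementation.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}

	func main() {
		// Kubeconfig path and pod name are assumptions for the example.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitPodReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-fgjhz", 6*time.Minute))
	}
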
	I0318 12:46:12.321791 1075954 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:46:12.321844 1075954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:46:13.632535 1075954 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0318 12:46:13.632585 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:13.636480 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:13.637101 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:13.637131 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:13.637307 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:13.637542 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:13.637744 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:13.637887 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:14.143384 1075954 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0318 12:46:14.445884 1075954 addons.go:234] Setting addon gcp-auth=true in "addons-106685"
	I0318 12:46:14.445963 1075954 host.go:66] Checking if "addons-106685" exists ...
	I0318 12:46:14.446307 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:14.446340 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:14.464016 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0318 12:46:14.464659 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:14.465247 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:14.465275 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:14.465672 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:14.466185 1075954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:46:14.466215 1075954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:46:14.482816 1075954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44055
	I0318 12:46:14.483360 1075954 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:46:14.483892 1075954 main.go:141] libmachine: Using API Version  1
	I0318 12:46:14.483915 1075954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:46:14.484304 1075954 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:46:14.484523 1075954 main.go:141] libmachine: (addons-106685) Calling .GetState
	I0318 12:46:14.486284 1075954 main.go:141] libmachine: (addons-106685) Calling .DriverName
	I0318 12:46:14.486574 1075954 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0318 12:46:14.486607 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHHostname
	I0318 12:46:14.489668 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:14.490182 1075954 main.go:141] libmachine: (addons-106685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:c4:53", ip: ""} in network mk-addons-106685: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:27 +0000 UTC Type:0 Mac:52:54:00:ae:c4:53 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-106685 Clientid:01:52:54:00:ae:c4:53}
	I0318 12:46:14.490216 1075954 main.go:141] libmachine: (addons-106685) DBG | domain addons-106685 has defined IP address 192.168.39.205 and MAC address 52:54:00:ae:c4:53 in network mk-addons-106685
	I0318 12:46:14.490531 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHPort
	I0318 12:46:14.490720 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHKeyPath
	I0318 12:46:14.490927 1075954 main.go:141] libmachine: (addons-106685) Calling .GetSSHUsername
	I0318 12:46:14.491117 1075954 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/addons-106685/id_rsa Username:docker}
	I0318 12:46:18.379950 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.830012297s)
	I0318 12:46:18.380022 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380038 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380055 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.720888293s)
	I0318 12:46:18.380093 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.944497327s)
	I0318 12:46:18.380105 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380197 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380210 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.650787889s)
	I0318 12:46:18.380172 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.676973345s)
	I0318 12:46:18.380258 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380268 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380291 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.607567859s)
	I0318 12:46:18.380320 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380332 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380423 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.551359195s)
	I0318 12:46:18.380175 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380464 1075954 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (10.394480882s)
	I0318 12:46:18.380480 1075954 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0318 12:46:18.380510 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.380522 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.380523 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.380525 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.380537 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.380551 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.380553 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380559 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.380561 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380568 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380576 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380596 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.296040897s)
	I0318 12:46:18.380531 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380621 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380625 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380636 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380690 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.229803061s)
	I0318 12:46:18.380715 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380726 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380770 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.61328885s)
	I0318 12:46:18.380790 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.380796 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380814 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380827 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.380847 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.380854 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.380862 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380869 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380928 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.441296322s)
	I0318 12:46:18.380949 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.380959 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380976 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.381000 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.381088 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.398845746s)
	W0318 12:46:18.381129 1075954 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0318 12:46:18.381155 1075954 retry.go:31] will retry after 229.259896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
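	The failure above is a CRD ordering issue: the VolumeSnapshotClass object is applied in the same invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind and kubectl reports "no matches for kind"; the scheduled retry is expected to succeed once the CRDs are established. Below is a hedged Go sketch of waiting for a CRD's Established condition before applying objects that depend on it; it uses the standard apiextensions clientset, but the kubeconfig path and timeout are assumptions and this is not the addons code's actual retry logic.

	package main

	import (
		"context"
		"fmt"
		"time"

		apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitCRDEstablished polls until the named CRD reports Established=True,
	// which is effectively what the failed apply above is waiting for before
	// the VolumeSnapshotClass can be created. Illustrative only.
	func waitCRDEstablished(ctx context.Context, client apiextensionsclient.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range crd.Status.Conditions {
					if c.Type == apiextensionsv1.Established && c.Status == apiextensionsv1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("CRD %s not established within %v", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path is an assumption
		if err != nil {
			panic(err)
		}
		client, err := apiextensionsclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitCRDEstablished(context.Background(), client,
			"volumesnapshotclasses.snapshot.storage.k8s.io", 60*time.Second))
	}
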
	I0318 12:46:18.381228 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.943125188s)
	I0318 12:46:18.381246 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.381256 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.381328 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.381353 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.381360 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.380485 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380239 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.381822 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380529 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.383276 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.383354 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.383380 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.383398 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.383415 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.383889 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.383953 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.383972 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.384190 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384249 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.384267 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.384284 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.384311 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.384602 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384613 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384690 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.384708 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.384717 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384743 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384764 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.384781 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.384784 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.384803 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.384817 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.384828 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.380440 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.384941 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.384995 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.384696 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385055 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.385104 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385125 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.385145 1075954 addons.go:470] Verifying addon registry=true in "addons-106685"
	I0318 12:46:18.388280 1075954 out.go:177] * Verifying registry addon...
	I0318 12:46:18.384767 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385333 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385378 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.385398 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385405 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.385425 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.385442 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.385475 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.385506 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.387742 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.387764 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.387793 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.387913 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.389589 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.390557 1075954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0318 12:46:18.390836 1075954 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-106685 service yakd-dashboard -n yakd-dashboard
	
	I0318 12:46:18.390858 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.390862 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392012 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.392025 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.390866 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.390878 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392116 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.392128 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.390882 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392156 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.392165 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.390897 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392093 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.392251 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.392605 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.392615 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.392623 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.392636 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.392636 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392640 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.392651 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.392651 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392664 1075954 addons.go:470] Verifying addon metrics-server=true in "addons-106685"
	I0318 12:46:18.392676 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.392683 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.392692 1075954 addons.go:470] Verifying addon ingress=true in "addons-106685"
	I0318 12:46:18.392699 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.392709 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.394189 1075954 out.go:177] * Verifying ingress addon...
	I0318 12:46:18.396325 1075954 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0318 12:46:18.443375 1075954 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0318 12:46:18.443408 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:18.457772 1075954 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0318 12:46:18.457802 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
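The kapi.go lines above poll until the selected pods leave Pending. The same readiness check can be reproduced by hand with kubectl wait against the label selectors and namespaces shown in the log (a manual sketch, not what kapi.go itself runs):

	kubectl --context addons-106685 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=300s
	kubectl --context addons-106685 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=300s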
	I0318 12:46:18.502150 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.502172 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.502585 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.502635 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.502643 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	W0318 12:46:18.502767 1075954 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
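The 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict: the local-path StorageClass was modified between minikube's read and its update, so the write was rejected with "the object has been modified". The remedy the message suggests (re-apply against the latest version) can be illustrated with a patch, which is merged server-side against the current object and so cannot carry a stale resourceVersion; this is a manual sketch, not what the addon itself executes:

	kubectl --context addons-106685 patch storageclass local-path \
	  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'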
	I0318 12:46:18.519285 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:18.519321 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:18.519782 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:18.519819 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:18.519844 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:18.610652 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 12:46:18.884782 1075954 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-106685" context rescaled to 1 replicas
	I0318 12:46:18.897776 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:18.901295 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:19.687469 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:19.694178 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:19.747528 1075954 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.425658268s)
	I0318 12:46:19.747580 1075954 api_server.go:72] duration metric: took 12.849419266s to wait for apiserver process to appear ...
	I0318 12:46:19.747588 1075954 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:46:19.747618 1075954 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0318 12:46:19.747616 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.365090365s)
	I0318 12:46:19.747645 1075954 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.261043009s)
	I0318 12:46:19.747669 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:19.747686 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:19.749794 1075954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 12:46:19.748083 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:19.748142 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:19.751312 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:19.752782 1075954 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0318 12:46:19.751343 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:19.754394 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:19.754457 1075954 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0318 12:46:19.754479 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0318 12:46:19.754741 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:19.754780 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:19.754786 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:19.754797 1075954 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-106685"
	I0318 12:46:19.756579 1075954 out.go:177] * Verifying csi-hostpath-driver addon...
	I0318 12:46:19.759287 1075954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0318 12:46:19.819538 1075954 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0318 12:46:19.819572 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:19.823159 1075954 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0318 12:46:19.835733 1075954 api_server.go:141] control plane version: v1.28.4
	I0318 12:46:19.835773 1075954 api_server.go:131] duration metric: took 88.177164ms to wait for apiserver health ...
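The healthz probe above is an ordinary HTTPS GET against the API server; on a default configuration the endpoint is readable without credentials (the system:public-info-viewer role exposes /healthz, /livez and /readyz to anonymous users), so the check can be repeated by hand, assuming anonymous auth has not been disabled:

	curl -sk https://192.168.39.205:8443/healthz
	# a healthy control plane answers with the literal body: ok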
	I0318 12:46:19.835782 1075954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:46:19.878774 1075954 system_pods.go:59] 19 kube-system pods found
	I0318 12:46:19.878815 1075954 system_pods.go:61] "coredns-5dd5756b68-fgjhz" [d2fa8bcb-a39b-4837-b965-4cbf558cf890] Running
	I0318 12:46:19.878822 1075954 system_pods.go:61] "coredns-5dd5756b68-qf446" [79feb7b9-b1c9-42a6-adbb-324e45aa35ec] Running
	I0318 12:46:19.878831 1075954 system_pods.go:61] "csi-hostpath-attacher-0" [8500ab8e-1f4b-4d6c-8ea7-183a45765ccd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0318 12:46:19.878837 1075954 system_pods.go:61] "csi-hostpath-resizer-0" [0a1779b3-86ae-429b-9ea1-11ea3b7dd11f] Pending
	I0318 12:46:19.878846 1075954 system_pods.go:61] "csi-hostpathplugin-tdddd" [683115f5-0641-4123-81af-970fe5185bbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 12:46:19.878852 1075954 system_pods.go:61] "etcd-addons-106685" [eae56be1-5d4d-470f-b176-c6382f70f80d] Running
	I0318 12:46:19.878858 1075954 system_pods.go:61] "kube-apiserver-addons-106685" [3f02b47b-2644-4acd-a455-71779192f951] Running
	I0318 12:46:19.878862 1075954 system_pods.go:61] "kube-controller-manager-addons-106685" [8ea59361-4e78-4978-b47a-cf380d4098c7] Running
	I0318 12:46:19.878870 1075954 system_pods.go:61] "kube-ingress-dns-minikube" [b2c4ec5a-1796-470c-b324-7c018ab2799d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0318 12:46:19.878875 1075954 system_pods.go:61] "kube-proxy-ll74j" [5d5816ef-f9cb-492d-a933-16308c544452] Running
	I0318 12:46:19.878882 1075954 system_pods.go:61] "kube-scheduler-addons-106685" [996af90e-7a6e-4814-ba1a-55cabcc82da0] Running
	I0318 12:46:19.878891 1075954 system_pods.go:61] "metrics-server-69cf46c98-b9sd4" [ef2ad747-2bac-41dc-9aa5-96fa6e675413] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 12:46:19.878903 1075954 system_pods.go:61] "nvidia-device-plugin-daemonset-rgg96" [375e6fa2-ca11-40df-b093-1c93e6401092] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0318 12:46:19.878913 1075954 system_pods.go:61] "registry-proxy-j97lj" [8ea57f10-a30d-4291-9636-1e99d163e226] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0318 12:46:19.878923 1075954 system_pods.go:61] "registry-vw2h8" [de58d932-6f78-479f-9d49-55619fa3881a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0318 12:46:19.878933 1075954 system_pods.go:61] "snapshot-controller-58dbcc7b99-2gcn9" [1095c47c-fd36-43a0-94f7-f1aae5fe1090] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:46:19.878949 1075954 system_pods.go:61] "snapshot-controller-58dbcc7b99-5vtqp" [eaf7d472-3dbe-449f-b580-e851b86a5850] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:46:19.878955 1075954 system_pods.go:61] "storage-provisioner" [08aa63b8-ea35-4443-b7d6-fd52b4de2b95] Running
	I0318 12:46:19.878964 1075954 system_pods.go:61] "tiller-deploy-7b677967b9-599zv" [bf1c4d73-4b36-4d8d-a497-58eeab0d4f6d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0318 12:46:19.878974 1075954 system_pods.go:74] duration metric: took 43.18332ms to wait for pod list to return data ...
	I0318 12:46:19.878987 1075954 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:46:19.913404 1075954 default_sa.go:45] found service account: "default"
	I0318 12:46:19.913445 1075954 default_sa.go:55] duration metric: took 34.442177ms for default service account to be created ...
	I0318 12:46:19.913459 1075954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:46:19.934364 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:19.938716 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:19.959882 1075954 system_pods.go:86] 19 kube-system pods found
	I0318 12:46:19.959920 1075954 system_pods.go:89] "coredns-5dd5756b68-fgjhz" [d2fa8bcb-a39b-4837-b965-4cbf558cf890] Running
	I0318 12:46:19.959926 1075954 system_pods.go:89] "coredns-5dd5756b68-qf446" [79feb7b9-b1c9-42a6-adbb-324e45aa35ec] Running
	I0318 12:46:19.959934 1075954 system_pods.go:89] "csi-hostpath-attacher-0" [8500ab8e-1f4b-4d6c-8ea7-183a45765ccd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0318 12:46:19.959943 1075954 system_pods.go:89] "csi-hostpath-resizer-0" [0a1779b3-86ae-429b-9ea1-11ea3b7dd11f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0318 12:46:19.959952 1075954 system_pods.go:89] "csi-hostpathplugin-tdddd" [683115f5-0641-4123-81af-970fe5185bbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 12:46:19.959958 1075954 system_pods.go:89] "etcd-addons-106685" [eae56be1-5d4d-470f-b176-c6382f70f80d] Running
	I0318 12:46:19.959963 1075954 system_pods.go:89] "kube-apiserver-addons-106685" [3f02b47b-2644-4acd-a455-71779192f951] Running
	I0318 12:46:19.959968 1075954 system_pods.go:89] "kube-controller-manager-addons-106685" [8ea59361-4e78-4978-b47a-cf380d4098c7] Running
	I0318 12:46:19.959974 1075954 system_pods.go:89] "kube-ingress-dns-minikube" [b2c4ec5a-1796-470c-b324-7c018ab2799d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0318 12:46:19.959979 1075954 system_pods.go:89] "kube-proxy-ll74j" [5d5816ef-f9cb-492d-a933-16308c544452] Running
	I0318 12:46:19.959984 1075954 system_pods.go:89] "kube-scheduler-addons-106685" [996af90e-7a6e-4814-ba1a-55cabcc82da0] Running
	I0318 12:46:19.959993 1075954 system_pods.go:89] "metrics-server-69cf46c98-b9sd4" [ef2ad747-2bac-41dc-9aa5-96fa6e675413] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 12:46:19.960000 1075954 system_pods.go:89] "nvidia-device-plugin-daemonset-rgg96" [375e6fa2-ca11-40df-b093-1c93e6401092] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0318 12:46:19.960008 1075954 system_pods.go:89] "registry-proxy-j97lj" [8ea57f10-a30d-4291-9636-1e99d163e226] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0318 12:46:19.960015 1075954 system_pods.go:89] "registry-vw2h8" [de58d932-6f78-479f-9d49-55619fa3881a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0318 12:46:19.960024 1075954 system_pods.go:89] "snapshot-controller-58dbcc7b99-2gcn9" [1095c47c-fd36-43a0-94f7-f1aae5fe1090] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:46:19.960034 1075954 system_pods.go:89] "snapshot-controller-58dbcc7b99-5vtqp" [eaf7d472-3dbe-449f-b580-e851b86a5850] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:46:19.960038 1075954 system_pods.go:89] "storage-provisioner" [08aa63b8-ea35-4443-b7d6-fd52b4de2b95] Running
	I0318 12:46:19.960045 1075954 system_pods.go:89] "tiller-deploy-7b677967b9-599zv" [bf1c4d73-4b36-4d8d-a497-58eeab0d4f6d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0318 12:46:19.960053 1075954 system_pods.go:126] duration metric: took 46.586843ms to wait for k8s-apps to be running ...
	I0318 12:46:19.960063 1075954 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:46:19.960117 1075954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:46:19.961095 1075954 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0318 12:46:19.961119 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0318 12:46:20.110368 1075954 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 12:46:20.110404 1075954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0318 12:46:20.280613 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:20.300520 1075954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 12:46:20.408356 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:20.408411 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:20.769296 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:20.896953 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:20.901108 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:21.272778 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:21.396453 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:21.402515 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:21.657571 1075954 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.697419004s)
	I0318 12:46:21.657630 1075954 system_svc.go:56] duration metric: took 1.697561258s WaitForService to wait for kubelet
	I0318 12:46:21.657644 1075954 kubeadm.go:576] duration metric: took 14.759483192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:46:21.657684 1075954 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:46:21.657571 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.046856093s)
	I0318 12:46:21.657751 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:21.657771 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:21.658211 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:21.658231 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:21.658242 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:21.658251 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:21.658554 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:21.658586 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:21.661523 1075954 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:46:21.661547 1075954 node_conditions.go:123] node cpu capacity is 2
	I0318 12:46:21.661557 1075954 node_conditions.go:105] duration metric: took 3.866392ms to run NodePressure ...
	I0318 12:46:21.661570 1075954 start.go:240] waiting for startup goroutines ...
	I0318 12:46:21.765868 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:21.923647 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:21.924298 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:22.223330 1075954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.922757305s)
	I0318 12:46:22.223407 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:22.223426 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:22.223767 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:22.223791 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:22.223803 1075954 main.go:141] libmachine: Making call to close driver server
	I0318 12:46:22.223816 1075954 main.go:141] libmachine: (addons-106685) Calling .Close
	I0318 12:46:22.224358 1075954 main.go:141] libmachine: (addons-106685) DBG | Closing plugin on server side
	I0318 12:46:22.224429 1075954 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:46:22.224456 1075954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:46:22.226126 1075954 addons.go:470] Verifying addon gcp-auth=true in "addons-106685"
	I0318 12:46:22.227789 1075954 out.go:177] * Verifying gcp-auth addon...
	I0318 12:46:22.229680 1075954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0318 12:46:22.251029 1075954 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0318 12:46:22.251056 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:22.291112 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:22.397387 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:22.402173 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:22.733675 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:22.766918 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:22.897535 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:22.903300 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:23.234603 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:23.265740 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:23.396388 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:23.399920 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:23.734510 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:23.766354 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:23.897511 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:23.901469 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:24.233783 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:24.266455 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:24.400102 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:24.406598 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:24.734194 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:24.765642 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:25.149129 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:25.152770 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:25.234683 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:25.265489 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:25.406624 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:25.410916 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:25.734255 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:25.768671 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:25.896766 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:25.900830 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:26.234037 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:26.266985 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:26.396586 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:26.400823 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:26.738259 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:26.766162 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:26.896433 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:26.900674 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:27.235538 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:27.265696 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:27.395913 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:27.400548 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:27.734360 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:27.765888 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:27.898798 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:27.900728 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:28.237197 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:28.265749 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:28.397929 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:28.400947 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:28.734621 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:28.765218 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:28.895970 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:28.900946 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:29.234437 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:29.266441 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:29.396428 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:29.400645 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:29.733980 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:29.767713 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:29.896255 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:29.900269 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:30.233951 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:30.265110 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:30.398717 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:30.401856 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:30.735097 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:30.765672 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:30.895944 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:30.900482 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:31.234506 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:31.266477 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:31.396764 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:31.400493 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:31.733833 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:31.765340 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:31.896259 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:31.900197 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:32.233893 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:32.267250 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:32.396277 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:32.400542 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:32.734510 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:32.765912 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:32.896061 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:32.899988 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:33.434272 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:33.437913 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:33.440588 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:33.442078 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:33.733252 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:33.766013 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:33.895936 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:33.899901 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:34.234441 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:34.266620 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:34.396926 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:34.400709 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:34.733748 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:34.769393 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:34.899723 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:34.906693 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:35.234351 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:35.266659 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:35.396166 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:35.400730 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:35.734326 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:35.766432 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:35.896866 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:35.902674 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:36.235914 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:36.265887 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:36.406281 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:36.406336 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:36.735090 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:36.767306 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:36.896404 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:36.901347 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:37.234345 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:37.267207 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:37.396601 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:37.400698 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:37.734272 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:37.766347 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:37.905377 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:37.915429 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:38.234006 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:38.265262 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:38.396609 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:38.401505 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:38.733806 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:38.765335 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:38.896351 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:38.900423 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:39.234569 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:39.266592 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:39.395822 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:39.400084 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:39.733806 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:39.765849 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:39.895908 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:39.900315 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:40.234096 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:40.267143 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:40.399422 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:40.403279 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:40.733430 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:40.768449 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:40.896919 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:40.900159 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:41.234405 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:41.268875 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:41.396581 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:41.400324 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:41.735608 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:41.768957 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:41.896730 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:46:41.902078 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:42.235048 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:42.265938 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:42.397297 1075954 kapi.go:107] duration metric: took 24.006738369s to wait for kubernetes.io/minikube-addons=registry ...
	I0318 12:46:42.401026 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:42.735385 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:42.772691 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:42.901268 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:43.234535 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:43.266711 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:43.401386 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:43.734103 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:43.767479 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:43.901912 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:44.234383 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:44.271009 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:44.401600 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:44.734186 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:44.766086 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:44.901535 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:45.234993 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:45.264979 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:45.401718 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:45.734382 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:45.766649 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:45.902136 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:46.234292 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:46.265936 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:46.402017 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:46.734843 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:46.766327 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:46.901278 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:47.234780 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:47.265742 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:47.401805 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:47.734064 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:47.765851 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:47.901844 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:48.235009 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:48.269216 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:48.401460 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:48.734563 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:48.768515 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:48.901178 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:49.235041 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:49.273374 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:49.402102 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:49.807307 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:49.809282 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:49.902669 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:50.234466 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:50.266941 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:50.401774 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:50.734374 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:50.766305 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:50.902393 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:51.234752 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:51.266106 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:51.400932 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:51.734756 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:51.765617 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:51.901116 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:52.234557 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:52.266799 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:52.404203 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:52.734284 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:52.768764 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:52.901685 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:53.233866 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:53.265802 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:53.401977 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:53.734584 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:53.766100 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:53.901876 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:54.234758 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:54.265214 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:54.403662 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:54.735154 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:54.765961 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:54.906905 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:55.234653 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:55.266082 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:55.401730 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:55.737005 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:56.050924 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:56.055919 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:56.235286 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:56.266413 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:56.404431 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:56.733888 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:56.766837 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:56.902085 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:57.234694 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:57.268277 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:57.402084 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:57.734915 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:57.766708 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:57.903553 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:58.236691 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:58.264921 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:58.402421 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:58.736987 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:58.806466 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:58.906543 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:59.239025 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:59.265666 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:59.401696 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:46:59.734354 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:46:59.766190 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:46:59.905170 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:00.234149 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:00.265755 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:00.401795 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:00.733867 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:00.766396 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:00.900975 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:01.234892 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:01.265415 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:01.401297 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:01.733618 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:01.766363 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:01.901204 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:02.576369 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:02.577091 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:02.577512 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:02.734599 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:02.766568 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:02.901423 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:03.235787 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:03.265296 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:03.402031 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:03.736456 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:03.766802 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:03.901985 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:04.234786 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:04.273034 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:04.402530 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:04.734242 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:04.769605 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:04.902175 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:05.234217 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:05.266002 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:05.402103 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:05.735174 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:05.767126 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:05.903135 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:06.235521 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:06.265978 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:06.402417 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:06.739441 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:06.766139 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:06.901015 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:07.233769 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:07.266710 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:07.401582 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:07.734219 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:07.765472 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:07.901622 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:08.234023 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:08.266081 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:08.401398 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:08.733600 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:08.765001 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:08.901866 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:09.236997 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:09.266119 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:09.404625 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:09.733890 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:09.765206 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:09.901487 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:10.242534 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:10.267010 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:10.402106 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:10.733984 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:10.771589 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:10.904171 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:11.235645 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:11.267365 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:11.404794 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:11.736477 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:11.765394 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:11.901141 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:12.234822 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:12.265926 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:12.402358 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:12.733722 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:12.765246 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:12.900916 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:13.235312 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:13.266224 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:13.404218 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:13.760490 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:13.771444 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:13.901956 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:14.234358 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:14.266946 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:14.402000 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:14.733864 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:14.765456 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:14.902644 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:15.233797 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:15.265220 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:15.404240 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:15.734353 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:15.766090 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:15.902343 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:16.234450 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:16.265797 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:16.401966 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:16.736108 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:16.768701 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:17.304540 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:17.305361 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:17.309185 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:17.402157 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:17.735010 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:17.767041 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:17.903823 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:18.235744 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:18.266072 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:18.401693 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:18.739766 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:18.766351 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:18.905363 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:19.235384 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:19.266127 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:19.401465 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:19.734055 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:19.773981 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:19.901197 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:20.233723 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:20.266135 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:20.401823 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:20.735447 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:20.766501 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:20.905295 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:21.234542 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:21.273457 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:21.402519 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:21.737296 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:21.770563 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:21.903808 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:22.254718 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:22.305031 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:22.413284 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:22.736981 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:22.765244 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:22.903431 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:23.233926 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:23.266132 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:23.401704 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:23.734491 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:23.767077 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:23.901335 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:24.234767 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:24.269264 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:24.402275 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:24.733694 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:24.765057 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:24.903525 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:25.313372 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:25.314424 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:25.532691 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:25.781388 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:25.785161 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:25.902023 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:26.236186 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:26.267190 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:26.402625 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:26.742011 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:26.767792 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:26.902496 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:27.234018 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:27.265754 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:27.401966 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:27.734496 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:27.765844 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:27.901785 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:28.233717 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:28.265614 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:28.401922 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:28.734604 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:28.766270 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:28.901251 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:29.233662 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:29.264770 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:29.402288 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:30.005439 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:30.005686 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:30.009680 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:30.233774 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:30.265690 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:30.400955 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:30.734488 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:30.765301 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:30.901141 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:31.233958 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:31.265383 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:31.401732 1075954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:47:31.744766 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:31.784385 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:31.918035 1075954 kapi.go:107] duration metric: took 1m13.521704474s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0318 12:47:32.236327 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:32.266762 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:32.737759 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:32.768145 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:33.256684 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:33.270676 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:33.735428 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:33.765916 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:47:34.234171 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:34.266607 1075954 kapi.go:107] duration metric: took 1m14.507317762s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0318 12:47:34.733441 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:35.233892 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:35.735098 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:36.233814 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:36.735323 1075954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:47:37.236804 1075954 kapi.go:107] duration metric: took 1m15.007115484s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0318 12:47:37.239112 1075954 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-106685 cluster.
	I0318 12:47:37.240789 1075954 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0318 12:47:37.242383 1075954 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0318 12:47:37.243991 1075954 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, yakd, inspektor-gadget, nvidia-device-plugin, metrics-server, helm-tiller, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0318 12:47:37.245489 1075954 addons.go:505] duration metric: took 1m30.347320584s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns yakd inspektor-gadget nvidia-device-plugin metrics-server helm-tiller default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0318 12:47:37.245540 1075954 start.go:245] waiting for cluster config update ...
	I0318 12:47:37.245574 1075954 start.go:254] writing updated cluster config ...
	I0318 12:47:37.245895 1075954 ssh_runner.go:195] Run: rm -f paused
	I0318 12:47:37.304028 1075954 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 12:47:37.306234 1075954 out.go:177] * Done! kubectl is now configured to use "addons-106685" cluster and "default" namespace by default
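The repeated kapi.go:96 lines above show minikube polling each addon's pods by label selector until they leave Pending, then logging a duration metric once the wait succeeds. As a rough illustration only, here is a minimal client-go sketch of that kind of wait loop; the helper name waitForPodsRunning, the 500ms interval, and the kubeconfig handling are assumptions made for the example and are not minikube's actual kapi.go code.

// waitforpods.go — hypothetical sketch of label-selector polling, not minikube's real helper.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls every 500ms until every pod matching selector in
// namespace reports phase Running, or until timeout expires.
func waitForPodsRunning(cs kubernetes.Interface, namespace, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		if len(pods.Items) == 0 {
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One of the selectors seen in the log above.
	if err := waitForPodsRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}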
	
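The gcp-auth messages above state that the addon mounts GCP credentials into every pod created in the cluster unless the pod carries a label with the gcp-auth-skip-secret key. Below is a hedged client-go example of opting a pod out; the pod name, namespace, image, and label value are made up for illustration, and only the label key comes from the minikube output.

// skipsecret.go — hypothetical example of the opt-out label mentioned in the message above.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical pod name
			// Per the minikube message above, a label with the
			// gcp-auth-skip-secret key opts this pod out of credential mounting.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}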
	
	==> CRI-O <==
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.027471248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cac3ea4c-9785-438b-97d2-ce5aae6d8413 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.028177692Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cac3ea4c-9785-438b-97d2-ce5aae6d8413 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.040853787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4d0c3e2-3dc3-4370-987b-1e2e25719fbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.041003340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4d0c3e2-3dc3-4370-987b-1e2e25719fbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.042950494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:263cc073a7ada4ace2e1a4941894c21a8826bdfc4ec4cb2b78e6904210bd9384,PodSandboxId:1f955c546533c04351a221119cc0e5ea964cfaedd954af94a11c43d18a181e9f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710766056216935340,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-7d69788767-v52wf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 979481e8-a8f3-42f0-a864-0cbbb970295f,},Annotations:map[string]string{io.kubernetes.container.hash: 8bfc0e54,io.kubernetes.container.ports: [{\"co
ntainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639d7cbd635bec7a03e433b3cc4baca80eb0ab5101ec4eb2add72f33ddcf4cb9,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1710766053349846911,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5
185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: ad19b0e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66,PodSandboxId:13ceb8e22f1b49502ebbe7551baf8830de4c648ca58f47ac94ddf2db676d7ea9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_RUNNING,CreatedAt:1710766051389189804,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-76dc478dd8-h86zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernet
es.pod.uid: 82ecf218-fe2e-4983-ac66-4aceed2fb70e,},Annotations:map[string]string{io.kubernetes.container.hash: 217ad5d2,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:efa3659d6d1eb75a0bd3c9794b44859ccde04b08e91f5ca51e072a420d4a7fae,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0
01958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1710766043767143966,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 6f780dee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fa199d2eebb92ad729c83c4ae7050a834efef626a70fda6bf1e3f03b6a660f,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5
dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1710766042106174498,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 38b04f60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ab9d6b3fea078f0800ba2b91a8a3d10997c4bc3ed05d0b4b9d1cf36a4a2bb4,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sh
a256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1710766041130848747,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 285c4c05,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf669c3480f1038adad49ae09bd8b3c1fd9f511e491c04b45b1f4840703a68e1,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,M
etadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1710766039625272400,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: aa486d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953dbb378e5b4006c55ff54db6c9ae3210057e97f8655cff6a04fc432f1b3877,PodSandboxId:a3266c83
ddcd511b07506f8466b9009b3dc6bfa660d775afbf5c121f32e72163,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1710766037423689248,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1779b3-86ae-429b-9ea1-11ea3b7dd11f,},Annotations:map[string]string{io.kubernetes.container.hash: fdc232d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7e80ed5fdbb0bf562b6e5e2bc330de6d795a2f1eb9474554f2ba90ce65132e
,PodSandboxId:391dd7f665481cef418ac45dbfa5a8ae215fc567e3189a4b8157f946e7509d0b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1710766035359080333,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8500ab8e-1f4b-4d6c-8ea7-183a45765ccd,},Annotations:map[string]string{io.kubernetes.container.hash: b7e4ba26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9eaffca84bf0092cd1e0c0e39404e2b60d5
f1067b79beb7b0ddde293f53f4,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1710766033852183224,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: e30db171,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61ae866119874a73ecbc37488e1861c7338beef7a1759dade1c9586c13d9614,PodSandboxId:3951568b95d2fd0f8dc88b00836bcbeb4214d6ec25938a7416dc31ba2f77f43b,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710766029990201391,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-2gcn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1095c47c-fd36-43a0-94f7-f1aae5fe1090,},Annotations:map[string]string{io.kubernetes.container.hash: 9510d259,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aeaee22ccbbc9036bd66f7ccf420057425b0c84e8b1e9f61fe618548dd4c6cd,PodSandboxId:adea854b67ff1488914764d88430f72ce43786ce2217dfa36bae9e79d2693b49,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710766029888150188,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-2l56b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 60ebbf3d-9ad7-46d5-8322-97199a8c455a,},Annotations:map[string]string{io.kubernetes.container.hash: 4068f4ed,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protoco
l\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427429c96eb55de64e97be2f2d5e55f3a89c66b7d96d5880c4243b484cf7203e,PodSandboxId:f21414a2cb91f80250dc81de6d5fd1773f36a6e812acbf8c69132032e7a004ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710766017047320108,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-b9sd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2ad747-2bac-41dc-9aa5-96fa6e675413,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 75ec23c9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c6a76fb3f64f3934a68cb9c07fa63fa2b26b65b1d3cba263722f50f42704cc8,PodSandboxId:cddc4b0ea0ba4b611198d5e5754642b7853e267219b386f6125960b31cd0ddfa,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710766014871103660,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.ku
bernetes.pod.name: snapshot-controller-58dbcc7b99-5vtqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf7d472-3dbe-449f-b580-e851b86a5850,},Annotations:map[string]string{io.kubernetes.container.hash: 9f152632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fedd3e7ba9eb7f645da2809d83ae6c67958bc59173710db9eee9101d32e3076,PodSandboxId:b4f88af046b28b9b035f8125a5312df9a176e68e905de05649f33308212f18a5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35eab485356b42d23b307a833f61565766d6421917fd7176f994c3fc04555a2c,State:CONTAINER_RUNNING,CreatedAt:1710766013230123553,Labels:map[string]string{i
o.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6548d5df46-t7xl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1471fde7-0973-4eaa-a6bc-a01b595958dc,},Annotations:map[string]string{io.kubernetes.container.hash: 83800f2d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5f4c0e544a3c3a427017fcac95f8cd8d8d0b499a6ed1b8f962e7e2d1f69b4,PodSandboxId:951e107352e5f1c96fa58a30ee65726c2e81513380dfc4e2f60b192c3d25ef1a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710766007524018229,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-q66bb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d517aa47-1e1c-40b4-804f-ee78b8b68ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 9f604234,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f,PodSandboxId:6c6f6c464757632c6d223878367356b4334c3bb06bc165fe6db6a656e7f5600b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4ab
e27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1710765996411152981,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c4ec5a-1796-470c-b324-7c018ab2799d,},Annotations:map[string]string{io.kubernetes.container.hash: b622cca0,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ff60f6656212ae190bae254c369eb159ea877069b30f40188e0961a45a2706,PodSandboxId:22b2d5f4ae443508f60e65bfb561aaa9b92a474a4a60ac2061d03489d3f702e8,Met
adata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:074c438325eff35065c71bf4a00832ca6c77d7d34937a68ec301a5679932ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6df8d4b582f48a4fad5986341d6b27ec7ec2d5130db6f3b6898f3e49c076624,State:CONTAINER_RUNNING,CreatedAt:1710765986509993525,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-rgg96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375e6fa2-ca11-40df-b093-1c93e6401092,},Annotations:map[string]string{io.kubernetes.container.hash: 6c277fec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02,PodSandboxId:32799164389bb
1e21748bba2a22ef377aaae40aa89160cd5635e0841cba83c4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765977459291432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aa63b8-ea35-4443-b7d6-fd52b4de2b95,},Annotations:map[string]string{io.kubernetes.container.hash: 411ddc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553,PodSandboxId:0c5d389f84cb0b3e61a3dab70
eb10aee0f7330d5cee213613270d5b0fb05bf18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765969846966224,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qf446,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79feb7b9-b1c9-42a6-adbb-324e45aa35ec,},Annotations:map[string]string{io.kubernetes.container.hash: f73c9935,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10,PodSandboxId:1ccbe7d02bdedb50f5466efb31819a04de97003567b853f945ffe793eca754e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765968347772894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll74j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5816ef-f9cb-492d-a933-16308c544452,},Annotations:map[string]string{io.kubernetes.container.hash: c5db1137,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d,PodSandboxId:bb3ab6d94fd816fa39290ad51524e61e4c6c8fb30da6a1cb7bc32ceb0ebd635d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765948834867355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c711354b860ec72c4c9c1801ca1276b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},&Container{Id:89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21,PodSandboxId:3284869491f7b3045f8d4d22116e49ab5bffc48b402384b6435a4e3a7631ccc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765948842126583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1608f73716cdad8193b68b48974d752,},Annotations:map[string]string{io.kubernetes.container.hash: 618d26b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea17
67b3abd6811be9eb3b724908222760cb3c4f268af71b6f4d8c4c42016c2,PodSandboxId:31ccb175c26c39dd34ed8c67b99840e08f4f95abb27b0b4eb0715bd6a5664f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765948777812010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3ce20a5ebdba55d914571b099f373a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:cefbcb5554340c73779a2ac53c1d113bc4571a534c696beba62da4096b8a0837,PodSandboxId:9103b05230128df989c7dc335121da75ac34af7c0de65475a91fe3ab71660daa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765948776039378,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afb7367f1b05b96362989188d3d982e,},Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/
interceptors.go:74" id=a4d0c3e2-3dc3-4370-987b-1e2e25719fbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.048931326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3909b51c-ac8e-464e-82fe-f52319a2fc63 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.056044057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766079056003050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:477542,},InodesUsed:&UInt64Value{Value:184,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3909b51c-ac8e-464e-82fe-f52319a2fc63 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.085824324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f3f602c-f50e-474c-bf17-2960dc83df72 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.085994008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f3f602c-f50e-474c-bf17-2960dc83df72 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.087409826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3e3a2f0e01208bc408615ea4bbba221c8fe77b0199719b330804662cebfc8de,PodSandboxId:8f4d5c53e9f9170ebe278769548d7f6a1ee950d99e58a9fa456b63961cc29902,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1710766073451353683,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95ab8009-ede6-4b01-bcb6-cd68b09da803,},Annotations:map[string]string{io.kubernetes.container.hash: a13850dc,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279ec592df993c1fe9da6753ae54490f9d98f917c02752a25a7dbedc53580938,PodSandboxId:3383cc547c2a6ebea5ad419f82c9632bf6a94598c86f71ad0d74bbd4da9ad369,Metadata:&ContainerMetadata{Name:gadget,Attempt:2,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:81f48f8d24e42642898d5669b6926805425c3577187c161e14dcdd4f857e1f8e,State:CONTAINER_EXITED,CreatedAt:1710766056818347412,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-d9pkd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 3941952b-6285-4bc9-ae33-4e5fb135b104,},Annotations:map[string]string{io.kubernetes.container.hash: 9c6b0709,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263cc073a7ada4ace2e1a4941894c21a8826bdfc4ec4cb2b78e6904210bd9384,PodSandboxId:1f955c546533c04351a221119cc0e5ea964cfaedd954af94a11c43d18a181e9f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710766056216935340,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-7d69788767-v52wf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 979481e8-a8f3-42f0-a864-0cbbb970295f,},Annotations
:map[string]string{io.kubernetes.container.hash: 8bfc0e54,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639d7cbd635bec7a03e433b3cc4baca80eb0ab5101ec4eb2add72f33ddcf4cb9,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1710766053349846911,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-tdddd
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: ad19b0e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66,PodSandboxId:13ceb8e22f1b49502ebbe7551baf8830de4c648ca58f47ac94ddf2db676d7ea9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_RUNNING,CreatedAt:1710766051389189804,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: i
ngress-nginx-controller-76dc478dd8-h86zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 82ecf218-fe2e-4983-ac66-4aceed2fb70e,},Annotations:map[string]string{io.kubernetes.container.hash: 217ad5d2,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:efa3659d6d1eb75a0bd3c9794b44859ccde04b08e91f5ca51e072a420d4a7fae,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8
s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1710766043767143966,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 6f780dee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fa199d2eebb92ad729c83c4ae7050a834efef626a70fda6bf1e3f03b6a660f,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,}
,Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1710766042106174498,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 38b04f60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ab9d6b3fea078f0800ba2b91a8a3d10997c4bc3ed05d0b4b9d1cf36a4a2bb4,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata
{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1710766041130848747,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 285c4c05,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf669c3480f1038adad49ae09bd8b3c1fd9f511e491c04b45
b1f4840703a68e1,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1710766039625272400,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: aa486d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:953dbb378e5b4006c55ff54db6c9ae3210057e97f8655cff6a04fc432f1b3877,PodSandboxId:a3266c83ddcd511b07506f8466b9009b3dc6bfa660d775afbf5c121f32e72163,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1710766037423689248,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1779b3-86ae-429b-9ea1-11ea3b7dd11f,},Annotations:map[string]string{io.kubernetes.container.hash: fdc232d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:2a7e07706b051d68b0fbc95c9d5af012578d7ceeb37ea1417adba7c5c2fc54ad,PodSandboxId:5088889c909b8a6e462c7c28a782bc5c9b7404d890babaf2ecc22a6f09fdd344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710766035490354128,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q9qrg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17f868c2-2f63-421a-b995-ad4d2af21136,},Annotations:map[string]string{io.kubernetes.container.hash: cbb1eb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7e80ed5fdbb0bf562b6e5e2bc330de6d795a2f1eb9474554f2ba90ce65132e,PodSandboxId:391dd7f665481cef418ac45dbfa5a8ae215fc567e3189a4b8157f946e7509d0b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1710766035359080333,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8500ab8e-1f4b-4d6c-8ea7-183a45765ccd,},Annotations:map[string]string{io.kubernetes.container.hash: b7e4ba26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9eaffca84bf0092cd1e0c0e39404e2b60d5f1067b79beb7b0ddde293f53f4,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1710766033852183224,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: e30db171,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:091499b134c6cca7e442deda95e77c651d428d7cebefb2951174d83f99319c75,PodSandboxId:e17d32473bc90069b121cc1ec9304089a5f268b92512aca2b055f3652bf9346f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710766032284347789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-z2nvx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55455782-5bbd-45e1-9f4a-faf2d6cbbe54,},Annotations:map[string]string{io.kubernetes.container.hash: b6
2bde0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61ae866119874a73ecbc37488e1861c7338beef7a1759dade1c9586c13d9614,PodSandboxId:3951568b95d2fd0f8dc88b00836bcbeb4214d6ec25938a7416dc31ba2f77f43b,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710766029990201391,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-2gcn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1095c47c-fd36-43a0-94f7-f1a
ae5fe1090,},Annotations:map[string]string{io.kubernetes.container.hash: 9510d259,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aeaee22ccbbc9036bd66f7ccf420057425b0c84e8b1e9f61fe618548dd4c6cd,PodSandboxId:adea854b67ff1488914764d88430f72ce43786ce2217dfa36bae9e79d2693b49,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710766029888150188,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-2l56b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 60ebbf3d-9ad7-46d5-8322-
97199a8c455a,},Annotations:map[string]string{io.kubernetes.container.hash: 4068f4ed,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427429c96eb55de64e97be2f2d5e55f3a89c66b7d96d5880c4243b484cf7203e,PodSandboxId:f21414a2cb91f80250dc81de6d5fd1773f36a6e812acbf8c69132032e7a004ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710766017047320108,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.k
ubernetes.pod.name: metrics-server-69cf46c98-b9sd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2ad747-2bac-41dc-9aa5-96fa6e675413,},Annotations:map[string]string{io.kubernetes.container.hash: 75ec23c9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c6a76fb3f64f3934a68cb9c07fa63fa2b26b65b1d3cba263722f50f42704cc8,PodSandboxId:cddc4b0ea0ba4b611198d5e5754642b7853e267219b386f6125960b31cd0ddfa,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c
9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710766014871103660,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-5vtqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf7d472-3dbe-449f-b580-e851b86a5850,},Annotations:map[string]string{io.kubernetes.container.hash: 9f152632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fedd3e7ba9eb7f645da2809d83ae6c67958bc59173710db9eee9101d32e3076,PodSandboxId:b4f88af046b28b9b035f8125a5312df9a176e68e905de05649f33308212f18a5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:35eab485356b42d23b307a833f61565766d6421917fd7176f994c3fc04555a2c,State:CONTAINER_RUNNING,CreatedAt:1710766013230123553,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6548d5df46-t7xl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1471fde7-0973-4eaa-a6bc-a01b595958dc,},Annotations:map[string]string{io.kubernetes.container.hash: 83800f2d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5f4c0e544a3c3a427017fcac95f8cd8d8d0b499a6ed1b8f962e7e2d1f69b4,PodSandboxId:951e107352e5f1c96fa58a30ee65726c2e81513380dfc4e2f60b192c3d25ef1a,Metadata:&ContainerMetadata{Name:local-pat
h-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710766007524018229,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-q66bb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d517aa47-1e1c-40b4-804f-ee78b8b68ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 9f604234,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e7fe2f55adef04799de5ea8b0f32ff87df300f20ad94186b44de5ff7250573,PodSandboxId:e86fa0beabc62cf3a7e1ba61086118d148e8e5
539cdf8609fb855645ef851b29,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2fd211e7dcaaecc12a1c76088a88d83bd00bf716be19cef173392b68c5a3653,State:CONTAINER_EXITED,CreatedAt:1710766001766992832,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-j97lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea57f10-a30d-4291-9636-1e99d163e226,},Annotations:map[string]string{io.kubernetes.container.hash: 87b117c,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f,PodSandboxId:6c6f6c464757632c6d223878367356b4334c3bb06bc165fe6db6a656e7f5600b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1710765996411152981,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c4ec5a-1796-470c-b324-7c018ab2799d,},Annotations:map[string]string{io.kubernetes.container.hash: b622cca0,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ff60f6656212ae190bae254c369eb159ea877069b30f40188e0961a45a2706,PodSandboxId:22b2d5f4ae443508f60e65bfb561aaa9b92a474a4a60ac2061d03489d3f702e8,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:074c438325eff35065c71bf4a00832ca6c77d7d34937a68ec301a5679932ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6df8d4b582f48a4fad5986341d6b27ec7ec2d5130db6f3b6898f3e49c076624,State:CONTAINER_RUNNING,CreatedAt:1710765986509993525,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-rgg96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375e6fa2-ca11-40df-b093-1c93e6401092,},Annotations:map[string]string{io.kubernetes.container.hash: 6c277fec,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02,PodSandboxId:32799164389bb1e21748bba2a22ef377aaae40aa89160cd5635e0841cba83c4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765977459291432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aa63b8-ea35-4443-b7d6-fd52b4de2b95,},Annotations:map[string]string{io.kubernetes.container.hash: 411ddc87,io.kubernetes.containe
r.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553,PodSandboxId:0c5d389f84cb0b3e61a3dab70eb10aee0f7330d5cee213613270d5b0fb05bf18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765969846966224,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qf446,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79feb7b9-b1c9-42a6-adbb-324e45aa35ec,},Annotations:map[string]string{io.kubernetes.container.hash: f73c9935,io.kubernetes.container.ports: [{\"name\":\"dns\",\"c
ontainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10,PodSandboxId:1ccbe7d02bdedb50f5466efb31819a04de97003567b853f945ffe793eca754e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765968347772894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll74j,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 5d5816ef-f9cb-492d-a933-16308c544452,},Annotations:map[string]string{io.kubernetes.container.hash: c5db1137,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d,PodSandboxId:bb3ab6d94fd816fa39290ad51524e61e4c6c8fb30da6a1cb7bc32ceb0ebd635d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765948834867355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-106685,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c711354b860ec72c4c9c1801ca1276b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21,PodSandboxId:3284869491f7b3045f8d4d22116e49ab5bffc48b402384b6435a4e3a7631ccc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765948842126583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1608f73716cdad8193b68b48974
d752,},Annotations:map[string]string{io.kubernetes.container.hash: 618d26b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea1767b3abd6811be9eb3b724908222760cb3c4f268af71b6f4d8c4c42016c2,PodSandboxId:31ccb175c26c39dd34ed8c67b99840e08f4f95abb27b0b4eb0715bd6a5664f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765948777812010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3ce20a5ebdba55d9
14571b099f373a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cefbcb5554340c73779a2ac53c1d113bc4571a534c696beba62da4096b8a0837,PodSandboxId:9103b05230128df989c7dc335121da75ac34af7c0de65475a91fe3ab71660daa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765948776039378,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afb7367f1b05b96362989188d3d982e,},
Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f3f602c-f50e-474c-bf17-2960dc83df72 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.247109692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96ca66fd-b020-43d3-a19a-a9f1c17dcfe1 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.247214564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96ca66fd-b020-43d3-a19a-a9f1c17dcfe1 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.248749218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ccb1aa4-a1b9-4804-8554-321b095aec7c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.249864999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766079249835990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:477542,},InodesUsed:&UInt64Value{Value:184,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ccb1aa4-a1b9-4804-8554-321b095aec7c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.250712300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70a06d72-9990-49f2-9bc7-da288e143ad0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.250798083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70a06d72-9990-49f2-9bc7-da288e143ad0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.251417299Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3e3a2f0e01208bc408615ea4bbba221c8fe77b0199719b330804662cebfc8de,PodSandboxId:8f4d5c53e9f9170ebe278769548d7f6a1ee950d99e58a9fa456b63961cc29902,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1710766073451353683,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95ab8009-ede6-4b01-bcb6-cd68b09da803,},Annotations:map[string]string{io.kubernetes.container.hash: a13850dc,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279ec592df993c1fe9da6753ae54490f9d98f917c02752a25a7dbedc53580938,PodSandboxId:3383cc547c2a6ebea5ad419f82c9632bf6a94598c86f71ad0d74bbd4da9ad369,Metadata:&ContainerMetadata{Name:gadget,Attempt:2,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:81f48f8d24e42642898d5669b6926805425c3577187c161e14dcdd4f857e1f8e,State:CONTAINER_EXITED,CreatedAt:1710766056818347412,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-d9pkd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 3941952b-6285-4bc9-ae33-4e5fb135b104,},Annotations:map[string]string{io.kubernetes.container.hash: 9c6b0709,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263cc073a7ada4ace2e1a4941894c21a8826bdfc4ec4cb2b78e6904210bd9384,PodSandboxId:1f955c546533c04351a221119cc0e5ea964cfaedd954af94a11c43d18a181e9f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710766056216935340,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-7d69788767-v52wf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 979481e8-a8f3-42f0-a864-0cbbb970295f,},Annotations
:map[string]string{io.kubernetes.container.hash: 8bfc0e54,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639d7cbd635bec7a03e433b3cc4baca80eb0ab5101ec4eb2add72f33ddcf4cb9,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1710766053349846911,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-tdddd
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: ad19b0e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66,PodSandboxId:13ceb8e22f1b49502ebbe7551baf8830de4c648ca58f47ac94ddf2db676d7ea9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_RUNNING,CreatedAt:1710766051389189804,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: i
ngress-nginx-controller-76dc478dd8-h86zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 82ecf218-fe2e-4983-ac66-4aceed2fb70e,},Annotations:map[string]string{io.kubernetes.container.hash: 217ad5d2,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:efa3659d6d1eb75a0bd3c9794b44859ccde04b08e91f5ca51e072a420d4a7fae,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8
s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1710766043767143966,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 6f780dee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fa199d2eebb92ad729c83c4ae7050a834efef626a70fda6bf1e3f03b6a660f,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,}
,Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1710766042106174498,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 38b04f60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ab9d6b3fea078f0800ba2b91a8a3d10997c4bc3ed05d0b4b9d1cf36a4a2bb4,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata
{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1710766041130848747,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 285c4c05,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf669c3480f1038adad49ae09bd8b3c1fd9f511e491c04b45
b1f4840703a68e1,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1710766039625272400,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: aa486d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:953dbb378e5b4006c55ff54db6c9ae3210057e97f8655cff6a04fc432f1b3877,PodSandboxId:a3266c83ddcd511b07506f8466b9009b3dc6bfa660d775afbf5c121f32e72163,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1710766037423689248,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1779b3-86ae-429b-9ea1-11ea3b7dd11f,},Annotations:map[string]string{io.kubernetes.container.hash: fdc232d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:2a7e07706b051d68b0fbc95c9d5af012578d7ceeb37ea1417adba7c5c2fc54ad,PodSandboxId:5088889c909b8a6e462c7c28a782bc5c9b7404d890babaf2ecc22a6f09fdd344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710766035490354128,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q9qrg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17f868c2-2f63-421a-b995-ad4d2af21136,},Annotations:map[string]string{io.kubernetes.container.hash: cbb1eb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7e80ed5fdbb0bf562b6e5e2bc330de6d795a2f1eb9474554f2ba90ce65132e,PodSandboxId:391dd7f665481cef418ac45dbfa5a8ae215fc567e3189a4b8157f946e7509d0b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1710766035359080333,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8500ab8e-1f4b-4d6c-8ea7-183a45765ccd,},Annotations:map[string]string{io.kubernetes.container.hash: b7e4ba26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9eaffca84bf0092cd1e0c0e39404e2b60d5f1067b79beb7b0ddde293f53f4,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1710766033852183224,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: e30db171,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:091499b134c6cca7e442deda95e77c651d428d7cebefb2951174d83f99319c75,PodSandboxId:e17d32473bc90069b121cc1ec9304089a5f268b92512aca2b055f3652bf9346f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710766032284347789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-z2nvx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55455782-5bbd-45e1-9f4a-faf2d6cbbe54,},Annotations:map[string]string{io.kubernetes.container.hash: b6
2bde0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61ae866119874a73ecbc37488e1861c7338beef7a1759dade1c9586c13d9614,PodSandboxId:3951568b95d2fd0f8dc88b00836bcbeb4214d6ec25938a7416dc31ba2f77f43b,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710766029990201391,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-2gcn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1095c47c-fd36-43a0-94f7-f1a
ae5fe1090,},Annotations:map[string]string{io.kubernetes.container.hash: 9510d259,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aeaee22ccbbc9036bd66f7ccf420057425b0c84e8b1e9f61fe618548dd4c6cd,PodSandboxId:adea854b67ff1488914764d88430f72ce43786ce2217dfa36bae9e79d2693b49,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710766029888150188,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-2l56b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 60ebbf3d-9ad7-46d5-8322-
97199a8c455a,},Annotations:map[string]string{io.kubernetes.container.hash: 4068f4ed,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427429c96eb55de64e97be2f2d5e55f3a89c66b7d96d5880c4243b484cf7203e,PodSandboxId:f21414a2cb91f80250dc81de6d5fd1773f36a6e812acbf8c69132032e7a004ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710766017047320108,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.k
ubernetes.pod.name: metrics-server-69cf46c98-b9sd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2ad747-2bac-41dc-9aa5-96fa6e675413,},Annotations:map[string]string{io.kubernetes.container.hash: 75ec23c9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c6a76fb3f64f3934a68cb9c07fa63fa2b26b65b1d3cba263722f50f42704cc8,PodSandboxId:cddc4b0ea0ba4b611198d5e5754642b7853e267219b386f6125960b31cd0ddfa,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c
9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710766014871103660,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-5vtqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf7d472-3dbe-449f-b580-e851b86a5850,},Annotations:map[string]string{io.kubernetes.container.hash: 9f152632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fedd3e7ba9eb7f645da2809d83ae6c67958bc59173710db9eee9101d32e3076,PodSandboxId:b4f88af046b28b9b035f8125a5312df9a176e68e905de05649f33308212f18a5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:35eab485356b42d23b307a833f61565766d6421917fd7176f994c3fc04555a2c,State:CONTAINER_RUNNING,CreatedAt:1710766013230123553,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6548d5df46-t7xl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1471fde7-0973-4eaa-a6bc-a01b595958dc,},Annotations:map[string]string{io.kubernetes.container.hash: 83800f2d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5f4c0e544a3c3a427017fcac95f8cd8d8d0b499a6ed1b8f962e7e2d1f69b4,PodSandboxId:951e107352e5f1c96fa58a30ee65726c2e81513380dfc4e2f60b192c3d25ef1a,Metadata:&ContainerMetadata{Name:local-pat
h-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710766007524018229,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-q66bb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d517aa47-1e1c-40b4-804f-ee78b8b68ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 9f604234,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e7fe2f55adef04799de5ea8b0f32ff87df300f20ad94186b44de5ff7250573,PodSandboxId:e86fa0beabc62cf3a7e1ba61086118d148e8e5
539cdf8609fb855645ef851b29,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2fd211e7dcaaecc12a1c76088a88d83bd00bf716be19cef173392b68c5a3653,State:CONTAINER_EXITED,CreatedAt:1710766001766992832,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-j97lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea57f10-a30d-4291-9636-1e99d163e226,},Annotations:map[string]string{io.kubernetes.container.hash: 87b117c,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f,PodSandboxId:6c6f6c464757632c6d223878367356b4334c3bb06bc165fe6db6a656e7f5600b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1710765996411152981,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c4ec5a-1796-470c-b324-7c018ab2799d,},Annotations:map[string]string{io.kubernetes.container.hash: b622cca0,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ff60f6656212ae190bae254c369eb159ea877069b30f40188e0961a45a2706,PodSandboxId:22b2d5f4ae443508f60e65bfb561aaa9b92a474a4a60ac2061d03489d3f702e8,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:074c438325eff35065c71bf4a00832ca6c77d7d34937a68ec301a5679932ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6df8d4b582f48a4fad5986341d6b27ec7ec2d5130db6f3b6898f3e49c076624,State:CONTAINER_RUNNING,CreatedAt:1710765986509993525,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-rgg96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375e6fa2-ca11-40df-b093-1c93e6401092,},Annotations:map[string]string{io.kubernetes.container.hash: 6c277fec,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02,PodSandboxId:32799164389bb1e21748bba2a22ef377aaae40aa89160cd5635e0841cba83c4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765977459291432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aa63b8-ea35-4443-b7d6-fd52b4de2b95,},Annotations:map[string]string{io.kubernetes.container.hash: 411ddc87,io.kubernetes.containe
r.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553,PodSandboxId:0c5d389f84cb0b3e61a3dab70eb10aee0f7330d5cee213613270d5b0fb05bf18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765969846966224,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qf446,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79feb7b9-b1c9-42a6-adbb-324e45aa35ec,},Annotations:map[string]string{io.kubernetes.container.hash: f73c9935,io.kubernetes.container.ports: [{\"name\":\"dns\",\"c
ontainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10,PodSandboxId:1ccbe7d02bdedb50f5466efb31819a04de97003567b853f945ffe793eca754e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765968347772894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll74j,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 5d5816ef-f9cb-492d-a933-16308c544452,},Annotations:map[string]string{io.kubernetes.container.hash: c5db1137,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d,PodSandboxId:bb3ab6d94fd816fa39290ad51524e61e4c6c8fb30da6a1cb7bc32ceb0ebd635d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765948834867355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-106685,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c711354b860ec72c4c9c1801ca1276b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21,PodSandboxId:3284869491f7b3045f8d4d22116e49ab5bffc48b402384b6435a4e3a7631ccc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765948842126583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1608f73716cdad8193b68b48974
d752,},Annotations:map[string]string{io.kubernetes.container.hash: 618d26b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea1767b3abd6811be9eb3b724908222760cb3c4f268af71b6f4d8c4c42016c2,PodSandboxId:31ccb175c26c39dd34ed8c67b99840e08f4f95abb27b0b4eb0715bd6a5664f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765948777812010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3ce20a5ebdba55d9
14571b099f373a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cefbcb5554340c73779a2ac53c1d113bc4571a534c696beba62da4096b8a0837,PodSandboxId:9103b05230128df989c7dc335121da75ac34af7c0de65475a91fe3ab71660daa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765948776039378,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afb7367f1b05b96362989188d3d982e,},
Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70a06d72-9990-49f2-9bc7-da288e143ad0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.311007345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59655200-9007-47a4-bbd2-f15796aef59d name=/runtime.v1.RuntimeService/Version
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.311080562Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59655200-9007-47a4-bbd2-f15796aef59d name=/runtime.v1.RuntimeService/Version
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.312461036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c97762c-e26a-40a8-a6ea-70fd3db78199 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.313838657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766079313810603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:477542,},InodesUsed:&UInt64Value{Value:184,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c97762c-e26a-40a8-a6ea-70fd3db78199 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.316452110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=844ecf61-d7f2-4e93-9081-cd7183b6c940 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.316692714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=844ecf61-d7f2-4e93-9081-cd7183b6c940 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.318350746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3e3a2f0e01208bc408615ea4bbba221c8fe77b0199719b330804662cebfc8de,PodSandboxId:8f4d5c53e9f9170ebe278769548d7f6a1ee950d99e58a9fa456b63961cc29902,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1710766073451353683,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95ab8009-ede6-4b01-bcb6-cd68b09da803,},Annotations:map[string]string{io.kubernetes.container.hash: a13850dc,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279ec592df993c1fe9da6753ae54490f9d98f917c02752a25a7dbedc53580938,PodSandboxId:3383cc547c2a6ebea5ad419f82c9632bf6a94598c86f71ad0d74bbd4da9ad369,Metadata:&ContainerMetadata{Name:gadget,Attempt:2,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:81f48f8d24e42642898d5669b6926805425c3577187c161e14dcdd4f857e1f8e,State:CONTAINER_EXITED,CreatedAt:1710766056818347412,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-d9pkd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 3941952b-6285-4bc9-ae33-4e5fb135b104,},Annotations:map[string]string{io.kubernetes.container.hash: 9c6b0709,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263cc073a7ada4ace2e1a4941894c21a8826bdfc4ec4cb2b78e6904210bd9384,PodSandboxId:1f955c546533c04351a221119cc0e5ea964cfaedd954af94a11c43d18a181e9f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710766056216935340,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-7d69788767-v52wf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 979481e8-a8f3-42f0-a864-0cbbb970295f,},Annotations
:map[string]string{io.kubernetes.container.hash: 8bfc0e54,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639d7cbd635bec7a03e433b3cc4baca80eb0ab5101ec4eb2add72f33ddcf4cb9,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1710766053349846911,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-tdddd
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: ad19b0e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5ce82d167f30edf3061e3dc9bf0e0f3c25d73ed0bf0d3a3cb5d7896f167d66,PodSandboxId:13ceb8e22f1b49502ebbe7551baf8830de4c648ca58f47ac94ddf2db676d7ea9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_RUNNING,CreatedAt:1710766051389189804,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: i
ngress-nginx-controller-76dc478dd8-h86zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 82ecf218-fe2e-4983-ac66-4aceed2fb70e,},Annotations:map[string]string{io.kubernetes.container.hash: 217ad5d2,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:efa3659d6d1eb75a0bd3c9794b44859ccde04b08e91f5ca51e072a420d4a7fae,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8
s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1710766043767143966,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 6f780dee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73fa199d2eebb92ad729c83c4ae7050a834efef626a70fda6bf1e3f03b6a660f,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,}
,Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1710766042106174498,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 38b04f60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ab9d6b3fea078f0800ba2b91a8a3d10997c4bc3ed05d0b4b9d1cf36a4a2bb4,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata
{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1710766041130848747,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: 285c4c05,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf669c3480f1038adad49ae09bd8b3c1fd9f511e491c04b45
b1f4840703a68e1,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1710766039625272400,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: aa486d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:953dbb378e5b4006c55ff54db6c9ae3210057e97f8655cff6a04fc432f1b3877,PodSandboxId:a3266c83ddcd511b07506f8466b9009b3dc6bfa660d775afbf5c121f32e72163,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1710766037423689248,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1779b3-86ae-429b-9ea1-11ea3b7dd11f,},Annotations:map[string]string{io.kubernetes.container.hash: fdc232d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:2a7e07706b051d68b0fbc95c9d5af012578d7ceeb37ea1417adba7c5c2fc54ad,PodSandboxId:5088889c909b8a6e462c7c28a782bc5c9b7404d890babaf2ecc22a6f09fdd344,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710766035490354128,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q9qrg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17f868c2-2f63-421a-b995-ad4d2af21136,},Annotations:map[string]string{io.kubernetes.container.hash: cbb1eb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7e80ed5fdbb0bf562b6e5e2bc330de6d795a2f1eb9474554f2ba90ce65132e,PodSandboxId:391dd7f665481cef418ac45dbfa5a8ae215fc567e3189a4b8157f946e7509d0b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1710766035359080333,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8500ab8e-1f4b-4d6c-8ea7-183a45765ccd,},Annotations:map[string]string{io.kubernetes.container.hash: b7e4ba26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af9eaffca84bf0092cd1e0c0e39404e2b60d5f1067b79beb7b0ddde293f53f4,PodSandboxId:e57d6165c1e088df7fe4f1372606af28b99c17693ac0c0550354e37027a48d0c,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1710766033852183224,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-tdddd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683115f5-0641-4123-81af-970fe5185bbe,},Annotations:map[string]string{io.kubernetes.container.hash: e30db171,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:091499b134c6cca7e442deda95e77c651d428d7cebefb2951174d83f99319c75,PodSandboxId:e17d32473bc90069b121cc1ec9304089a5f268b92512aca2b055f3652bf9346f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710766032284347789,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-z2nvx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55455782-5bbd-45e1-9f4a-faf2d6cbbe54,},Annotations:map[string]string{io.kubernetes.container.hash: b6
2bde0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61ae866119874a73ecbc37488e1861c7338beef7a1759dade1c9586c13d9614,PodSandboxId:3951568b95d2fd0f8dc88b00836bcbeb4214d6ec25938a7416dc31ba2f77f43b,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710766029990201391,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-2gcn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1095c47c-fd36-43a0-94f7-f1a
ae5fe1090,},Annotations:map[string]string{io.kubernetes.container.hash: 9510d259,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aeaee22ccbbc9036bd66f7ccf420057425b0c84e8b1e9f61fe618548dd4c6cd,PodSandboxId:adea854b67ff1488914764d88430f72ce43786ce2217dfa36bae9e79d2693b49,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710766029888150188,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-2l56b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 60ebbf3d-9ad7-46d5-8322-
97199a8c455a,},Annotations:map[string]string{io.kubernetes.container.hash: 4068f4ed,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427429c96eb55de64e97be2f2d5e55f3a89c66b7d96d5880c4243b484cf7203e,PodSandboxId:f21414a2cb91f80250dc81de6d5fd1773f36a6e812acbf8c69132032e7a004ef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710766017047320108,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.k
ubernetes.pod.name: metrics-server-69cf46c98-b9sd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2ad747-2bac-41dc-9aa5-96fa6e675413,},Annotations:map[string]string{io.kubernetes.container.hash: 75ec23c9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c6a76fb3f64f3934a68cb9c07fa63fa2b26b65b1d3cba263722f50f42704cc8,PodSandboxId:cddc4b0ea0ba4b611198d5e5754642b7853e267219b386f6125960b31cd0ddfa,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c
9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710766014871103660,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-5vtqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf7d472-3dbe-449f-b580-e851b86a5850,},Annotations:map[string]string{io.kubernetes.container.hash: 9f152632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fedd3e7ba9eb7f645da2809d83ae6c67958bc59173710db9eee9101d32e3076,PodSandboxId:b4f88af046b28b9b035f8125a5312df9a176e68e905de05649f33308212f18a5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:35eab485356b42d23b307a833f61565766d6421917fd7176f994c3fc04555a2c,State:CONTAINER_RUNNING,CreatedAt:1710766013230123553,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6548d5df46-t7xl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1471fde7-0973-4eaa-a6bc-a01b595958dc,},Annotations:map[string]string{io.kubernetes.container.hash: 83800f2d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5f4c0e544a3c3a427017fcac95f8cd8d8d0b499a6ed1b8f962e7e2d1f69b4,PodSandboxId:951e107352e5f1c96fa58a30ee65726c2e81513380dfc4e2f60b192c3d25ef1a,Metadata:&ContainerMetadata{Name:local-pat
h-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710766007524018229,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-q66bb,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d517aa47-1e1c-40b4-804f-ee78b8b68ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 9f604234,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e7fe2f55adef04799de5ea8b0f32ff87df300f20ad94186b44de5ff7250573,PodSandboxId:e86fa0beabc62cf3a7e1ba61086118d148e8e5
539cdf8609fb855645ef851b29,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2fd211e7dcaaecc12a1c76088a88d83bd00bf716be19cef173392b68c5a3653,State:CONTAINER_EXITED,CreatedAt:1710766001766992832,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-j97lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea57f10-a30d-4291-9636-1e99d163e226,},Annotations:map[string]string{io.kubernetes.container.hash: 87b117c,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:91ed86e2b042c0a31571e29a98d95bc460f6bbf3d4274e802da56fe58ac7ea2f,PodSandboxId:6c6f6c464757632c6d223878367356b4334c3bb06bc165fe6db6a656e7f5600b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1710765996411152981,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c4ec5a-1796-470c-b324-7c018ab2799d,},Annotations:map[string]string{io.kubernetes.container.hash: b622cca0,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ff60f6656212ae190bae254c369eb159ea877069b30f40188e0961a45a2706,PodSandboxId:22b2d5f4ae443508f60e65bfb561aaa9b92a474a4a60ac2061d03489d3f702e8,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:074c438325eff35065c71bf4a00832ca6c77d7d34937a68ec301a5679932ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f6df8d4b582f48a4fad5986341d6b27ec7ec2d5130db6f3b6898f3e49c076624,State:CONTAINER_RUNNING,CreatedAt:1710765986509993525,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-rgg96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375e6fa2-ca11-40df-b093-1c93e6401092,},Annotations:map[string]string{io.kubernetes.container.hash: 6c277fec,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02,PodSandboxId:32799164389bb1e21748bba2a22ef377aaae40aa89160cd5635e0841cba83c4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765977459291432,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08aa63b8-ea35-4443-b7d6-fd52b4de2b95,},Annotations:map[string]string{io.kubernetes.container.hash: 411ddc87,io.kubernetes.containe
r.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553,PodSandboxId:0c5d389f84cb0b3e61a3dab70eb10aee0f7330d5cee213613270d5b0fb05bf18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765969846966224,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qf446,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79feb7b9-b1c9-42a6-adbb-324e45aa35ec,},Annotations:map[string]string{io.kubernetes.container.hash: f73c9935,io.kubernetes.container.ports: [{\"name\":\"dns\",\"c
ontainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10,PodSandboxId:1ccbe7d02bdedb50f5466efb31819a04de97003567b853f945ffe793eca754e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765968347772894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll74j,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 5d5816ef-f9cb-492d-a933-16308c544452,},Annotations:map[string]string{io.kubernetes.container.hash: c5db1137,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d,PodSandboxId:bb3ab6d94fd816fa39290ad51524e61e4c6c8fb30da6a1cb7bc32ceb0ebd635d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765948834867355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-106685,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c711354b860ec72c4c9c1801ca1276b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21,PodSandboxId:3284869491f7b3045f8d4d22116e49ab5bffc48b402384b6435a4e3a7631ccc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765948842126583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1608f73716cdad8193b68b48974
d752,},Annotations:map[string]string{io.kubernetes.container.hash: 618d26b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ea1767b3abd6811be9eb3b724908222760cb3c4f268af71b6f4d8c4c42016c2,PodSandboxId:31ccb175c26c39dd34ed8c67b99840e08f4f95abb27b0b4eb0715bd6a5664f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765948777812010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3ce20a5ebdba55d9
14571b099f373a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cefbcb5554340c73779a2ac53c1d113bc4571a534c696beba62da4096b8a0837,PodSandboxId:9103b05230128df989c7dc335121da75ac34af7c0de65475a91fe3ab71660daa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765948776039378,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-106685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afb7367f1b05b96362989188d3d982e,},
Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=844ecf61-d7f2-4e93-9081-cd7183b6c940 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:47:59 addons-106685 crio[673]: time="2024-03-18 12:47:59.381788489Z" level=debug msg="ImagePull (2): docker.io/library/nginx:latest (sha256:e78b137be3552e1f36d84cb01c533a23febe4c48f6fcdff5d5b26a45a636053b): 41387045 bytes (100.00%)" file="server/image_pull.go:276" id=081fe336-b42d-43ce-8bd0-344308d29df2 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	e3e3a2f0e0120       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          6 seconds ago        Exited              registry-test                            0                   8f4d5c53e9f91       registry-test
	279ec592df993       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff                            22 seconds ago       Exited              gadget                                   2                   3383cc547c2a6       gadget-d9pkd
	263cc073a7ada       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 23 seconds ago       Running             gcp-auth                                 0                   1f955c546533c       gcp-auth-7d69788767-v52wf
	639d7cbd635be       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          26 seconds ago       Running             csi-snapshotter                          0                   e57d6165c1e08       csi-hostpathplugin-tdddd
	bc5ce82d167f3       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             28 seconds ago       Running             controller                               0                   13ceb8e22f1b4       ingress-nginx-controller-76dc478dd8-h86zf
	efa3659d6d1eb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          35 seconds ago       Running             csi-provisioner                          0                   e57d6165c1e08       csi-hostpathplugin-tdddd
	73fa199d2eebb       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            37 seconds ago       Running             liveness-probe                           0                   e57d6165c1e08       csi-hostpathplugin-tdddd
	57ab9d6b3fea0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           38 seconds ago       Running             hostpath                                 0                   e57d6165c1e08       csi-hostpathplugin-tdddd
	bf669c3480f10       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                39 seconds ago       Running             node-driver-registrar                    0                   e57d6165c1e08       csi-hostpathplugin-tdddd
	953dbb378e5b4       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              42 seconds ago       Running             csi-resizer                              0                   a3266c83ddcd5       csi-hostpath-resizer-0
	2a7e07706b051       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023                   44 seconds ago       Exited              patch                                    0                   5088889c909b8       ingress-nginx-admission-patch-q9qrg
	ff7e80ed5fdbb       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             44 seconds ago       Running             csi-attacher                             0                   391dd7f665481       csi-hostpath-attacher-0
	4af9eaffca84b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   45 seconds ago       Running             csi-external-health-monitor-controller   0                   e57d6165c1e08       csi-hostpathplugin-tdddd
	091499b134c6c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023                   47 seconds ago       Exited              create                                   0                   e17d32473bc90       ingress-nginx-admission-create-z2nvx
	a61ae86611987       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      49 seconds ago       Running             volume-snapshot-controller               0                   3951568b95d2f       snapshot-controller-58dbcc7b99-2gcn9
	1aeaee22ccbbc       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                              49 seconds ago       Running             yakd                                     0                   adea854b67ff1       yakd-dashboard-9947fc6bf-2l56b
	427429c96eb55       registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca                        About a minute ago   Running             metrics-server                           0                   f21414a2cb91f       metrics-server-69cf46c98-b9sd4
	9c6a76fb3f64f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   cddc4b0ea0ba4       snapshot-controller-58dbcc7b99-5vtqp
	0fedd3e7ba9eb       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15                               About a minute ago   Running             cloud-spanner-emulator                   0                   b4f88af046b28       cloud-spanner-emulator-6548d5df46-t7xl8
	43e5f4c0e544a       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   951e107352e5f       local-path-provisioner-78b46b4d5c-q66bb
	a4e7fe2f55ade       gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5                              About a minute ago   Exited              registry-proxy                           0                   e86fa0beabc62       registry-proxy-j97lj
	91ed86e2b042c       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             About a minute ago   Running             minikube-ingress-dns                     0                   6c6f6c4647576       kube-ingress-dns-minikube
	e3ff60f665621       nvcr.io/nvidia/k8s-device-plugin@sha256:074c438325eff35065c71bf4a00832ca6c77d7d34937a68ec301a5679932ba5f                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   22b2d5f4ae443       nvidia-device-plugin-daemonset-rgg96
	7afc9eafec80d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   32799164389bb       storage-provisioner
	cde2882b0d6b4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago   Running             coredns                                  0                   0c5d389f84cb0       coredns-5dd5756b68-qf446
	7641c0665ab0b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                                             About a minute ago   Running             kube-proxy                               0                   1ccbe7d02bded       kube-proxy-ll74j
	89ba208b8a7ce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago        Running             etcd                                     0                   3284869491f7b       etcd-addons-106685
	43b114c0ac0bb       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                                             2 minutes ago        Running             kube-scheduler                           0                   bb3ab6d94fd81       kube-scheduler-addons-106685
	2ea1767b3abd6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                                             2 minutes ago        Running             kube-controller-manager                  0                   31ccb175c26c3       kube-controller-manager-addons-106685
	cefbcb5554340       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                                             2 minutes ago        Running             kube-apiserver                           0                   9103b05230128       kube-apiserver-addons-106685
	
	
	==> coredns [cde2882b0d6b47fe2b3d6b538a77bbc47f7c08e7d6290994bc5b16ace6492553] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43648 - 65501 "HINFO IN 1605989327845107632.4539336522483561825. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024121704s
	[INFO] 10.244.0.22:50704 - 65020 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000386299s
	[INFO] 10.244.0.22:50723 - 59421 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000143048s
	[INFO] 10.244.0.22:59820 - 30130 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118336s
	[INFO] 10.244.0.22:35897 - 6736 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000083634s
	[INFO] 10.244.0.22:57164 - 9487 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077682s
	[INFO] 10.244.0.22:38010 - 62495 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081567s
	[INFO] 10.244.0.22:49102 - 23464 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000989906s
	[INFO] 10.244.0.22:34634 - 60861 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000909565s
	[INFO] 10.244.0.26:46975 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000686507s
	[INFO] 10.244.0.26:36447 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106237s
	
	
	==> describe nodes <==
	Name:               addons-106685
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-106685
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=addons-106685
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_45_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-106685
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-106685"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:45:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-106685
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:47:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:47:57 +0000   Mon, 18 Mar 2024 12:45:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:47:57 +0000   Mon, 18 Mar 2024 12:45:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:47:57 +0000   Mon, 18 Mar 2024 12:45:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:47:57 +0000   Mon, 18 Mar 2024 12:45:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    addons-106685
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 6613acac122d44d8b209206584b45567
	  System UUID:                6613acac-122d-44d8-b209-206584b45567
	  Boot ID:                    37380014-2689-4f7f-9b39-095d095ff374
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-t7xl8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  gadget                      gadget-d9pkd                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  gcp-auth                    gcp-auth-7d69788767-v52wf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  ingress-nginx               ingress-nginx-controller-76dc478dd8-h86zf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         102s
	  kube-system                 coredns-5dd5756b68-qf446                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     112s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 csi-hostpathplugin-tdddd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 etcd-addons-106685                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m4s
	  kube-system                 kube-apiserver-addons-106685                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-addons-106685        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-ll74j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-addons-106685                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 metrics-server-69cf46c98-b9sd4               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         105s
	  kube-system                 nvidia-device-plugin-daemonset-rgg96         0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 snapshot-controller-58dbcc7b99-2gcn9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 snapshot-controller-58dbcc7b99-5vtqp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  local-path-storage          local-path-provisioner-78b46b4d5c-q66bb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-2l56b               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node addons-106685 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node addons-106685 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node addons-106685 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s                   kubelet          Node addons-106685 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s                   kubelet          Node addons-106685 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s                   kubelet          Node addons-106685 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m4s                   kubelet          Node addons-106685 status is now: NodeReady
	  Normal  RegisteredNode           113s                   node-controller  Node addons-106685 event: Registered Node addons-106685 in Controller
	
	
	==> dmesg <==
	[  +0.248565] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +5.091341] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +0.059969] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.596361] systemd-fstab-generator[915]: Ignoring "noauto" option for root device
	[  +1.401895] kauditd_printk_skb: 81 callbacks suppressed
	[  +5.862123] systemd-fstab-generator[1249]: Ignoring "noauto" option for root device
	[  +0.096777] kauditd_printk_skb: 6 callbacks suppressed
	[Mar18 12:46] systemd-fstab-generator[1458]: Ignoring "noauto" option for root device
	[  +0.162208] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.055794] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.152305] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.571548] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.653936] kauditd_printk_skb: 25 callbacks suppressed
	[ +17.920585] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.155030] kauditd_printk_skb: 9 callbacks suppressed
	[Mar18 12:47] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.467189] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.759240] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.342717] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.363439] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.032283] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.005235] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.028644] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.035798] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.076685] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [89ba208b8a7ce1e26dc492e2f5955a053bb07ac98540105c19d26ec948813d21] <==
	{"level":"info","ts":"2024-03-18T12:47:17.292614Z","caller":"traceutil/trace.go:171","msg":"trace[435381913] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1056; }","duration":"385.580493ms","start":"2024-03-18T12:47:16.907025Z","end":"2024-03-18T12:47:17.292605Z","steps":["trace[435381913] 'agreement among raft nodes before linearized reading'  (duration: 383.922135ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:17.292651Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:47:16.907012Z","time spent":"385.629341ms","remote":"127.0.0.1:52942","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-03-18T12:47:25.521928Z","caller":"traceutil/trace.go:171","msg":"trace[1125784687] linearizableReadLoop","detail":"{readStateIndex:1164; appliedIndex:1163; }","duration":"129.709371ms","start":"2024-03-18T12:47:25.392204Z","end":"2024-03-18T12:47:25.521913Z","steps":["trace[1125784687] 'read index received'  (duration: 129.503949ms)","trace[1125784687] 'applied index is now lower than readState.Index'  (duration: 204.66µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T12:47:25.522267Z","caller":"traceutil/trace.go:171","msg":"trace[361518930] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"171.081573ms","start":"2024-03-18T12:47:25.351171Z","end":"2024-03-18T12:47:25.522252Z","steps":["trace[361518930] 'process raft request'  (duration: 170.582251ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:25.522567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.36139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13724"}
	{"level":"info","ts":"2024-03-18T12:47:25.522651Z","caller":"traceutil/trace.go:171","msg":"trace[604180138] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1129; }","duration":"130.463215ms","start":"2024-03-18T12:47:25.39218Z","end":"2024-03-18T12:47:25.522643Z","steps":["trace[604180138] 'agreement among raft nodes before linearized reading'  (duration: 130.264335ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:29.991797Z","caller":"traceutil/trace.go:171","msg":"trace[1313279888] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"448.63376ms","start":"2024-03-18T12:47:29.543148Z","end":"2024-03-18T12:47:29.991782Z","steps":["trace[1313279888] 'process raft request'  (duration: 448.524393ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:29.99197Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:47:29.543132Z","time spent":"448.771267ms","remote":"127.0.0.1:53122","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1141 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-18T12:47:29.991803Z","caller":"traceutil/trace.go:171","msg":"trace[1304030548] linearizableReadLoop","detail":"{readStateIndex:1178; appliedIndex:1178; }","duration":"319.077634ms","start":"2024-03-18T12:47:29.672713Z","end":"2024-03-18T12:47:29.99179Z","steps":["trace[1304030548] 'read index received'  (duration: 319.072433ms)","trace[1304030548] 'applied index is now lower than readState.Index'  (duration: 4.27µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T12:47:29.992384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.525065ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10814"}
	{"level":"info","ts":"2024-03-18T12:47:29.992426Z","caller":"traceutil/trace.go:171","msg":"trace[606132864] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1142; }","duration":"266.574168ms","start":"2024-03-18T12:47:29.725845Z","end":"2024-03-18T12:47:29.99242Z","steps":["trace[606132864] 'agreement among raft nodes before linearized reading'  (duration: 266.492068ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:29.992451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.719284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-03-18T12:47:29.99257Z","caller":"traceutil/trace.go:171","msg":"trace[1732945] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1142; }","duration":"319.892275ms","start":"2024-03-18T12:47:29.672669Z","end":"2024-03-18T12:47:29.992561Z","steps":["trace[1732945] 'agreement among raft nodes before linearized reading'  (duration: 319.346423ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:29.992788Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:47:29.672656Z","time spent":"319.943657ms","remote":"127.0.0.1:53016","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":211,"response size":31,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	{"level":"warn","ts":"2024-03-18T12:47:29.99307Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.239834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81546"}
	{"level":"info","ts":"2024-03-18T12:47:29.993114Z","caller":"traceutil/trace.go:171","msg":"trace[920829100] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1142; }","duration":"237.285497ms","start":"2024-03-18T12:47:29.755822Z","end":"2024-03-18T12:47:29.993108Z","steps":["trace[920829100] 'agreement among raft nodes before linearized reading'  (duration: 237.071962ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:29.992882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.487415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13724"}
	{"level":"info","ts":"2024-03-18T12:47:29.99563Z","caller":"traceutil/trace.go:171","msg":"trace[1628500299] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1142; }","duration":"103.231614ms","start":"2024-03-18T12:47:29.892387Z","end":"2024-03-18T12:47:29.995618Z","steps":["trace[1628500299] 'agreement among raft nodes before linearized reading'  (duration: 100.459376ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:33.245862Z","caller":"traceutil/trace.go:171","msg":"trace[1681670925] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"223.018168ms","start":"2024-03-18T12:47:33.02283Z","end":"2024-03-18T12:47:33.245848Z","steps":["trace[1681670925] 'process raft request'  (duration: 222.906107ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:33.246672Z","caller":"traceutil/trace.go:171","msg":"trace[1807651838] transaction","detail":"{read_only:false; response_revision:1158; number_of_response:1; }","duration":"210.986663ms","start":"2024-03-18T12:47:33.035673Z","end":"2024-03-18T12:47:33.24666Z","steps":["trace[1807651838] 'process raft request'  (duration: 210.646092ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:33.246815Z","caller":"traceutil/trace.go:171","msg":"trace[1968247310] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"187.537816ms","start":"2024-03-18T12:47:33.059272Z","end":"2024-03-18T12:47:33.246809Z","steps":["trace[1968247310] 'process raft request'  (duration: 187.090536ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:55.273233Z","caller":"traceutil/trace.go:171","msg":"trace[977565410] transaction","detail":"{read_only:false; response_revision:1332; number_of_response:1; }","duration":"217.193575ms","start":"2024-03-18T12:47:55.055971Z","end":"2024-03-18T12:47:55.273165Z","steps":["trace[977565410] 'process raft request'  (duration: 212.164064ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:47:55.274341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.143742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T12:47:55.274378Z","caller":"traceutil/trace.go:171","msg":"trace[720055926] range","detail":"{range_begin:/registry/services/specs/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:1332; }","duration":"218.246559ms","start":"2024-03-18T12:47:55.056122Z","end":"2024-03-18T12:47:55.274369Z","steps":["trace[720055926] 'agreement among raft nodes before linearized reading'  (duration: 218.080596ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:47:55.275058Z","caller":"traceutil/trace.go:171","msg":"trace[1290267950] linearizableReadLoop","detail":"{readStateIndex:1379; appliedIndex:1378; }","duration":"218.856924ms","start":"2024-03-18T12:47:55.05619Z","end":"2024-03-18T12:47:55.275046Z","steps":["trace[1290267950] 'read index received'  (duration: 211.912573ms)","trace[1290267950] 'applied index is now lower than readState.Index'  (duration: 6.943412ms)"],"step_count":2}
	
	
	==> gcp-auth [263cc073a7ada4ace2e1a4941894c21a8826bdfc4ec4cb2b78e6904210bd9384] <==
	2024/03/18 12:47:36 GCP Auth Webhook started!
	2024/03/18 12:47:37 Ready to marshal response ...
	2024/03/18 12:47:37 Ready to write response ...
	2024/03/18 12:47:37 Ready to marshal response ...
	2024/03/18 12:47:37 Ready to write response ...
	2024/03/18 12:47:48 Ready to marshal response ...
	2024/03/18 12:47:48 Ready to write response ...
	2024/03/18 12:47:48 Ready to marshal response ...
	2024/03/18 12:47:48 Ready to write response ...
	2024/03/18 12:47:49 Ready to marshal response ...
	2024/03/18 12:47:49 Ready to write response ...
	2024/03/18 12:47:52 Ready to marshal response ...
	2024/03/18 12:47:52 Ready to write response ...
	2024/03/18 12:47:57 Ready to marshal response ...
	2024/03/18 12:47:57 Ready to write response ...
	
	
	==> kernel <==
	 12:48:00 up 2 min,  0 users,  load average: 2.94, 1.42, 0.55
	Linux addons-106685 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cefbcb5554340c73779a2ac53c1d113bc4571a534c696beba62da4096b8a0837] <==
	I0318 12:46:18.977705       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.104.224.146"}
	I0318 12:46:18.999796       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0318 12:46:19.174797       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.111.163.238"}
	W0318 12:46:19.686045       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0318 12:46:20.711201       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 12:46:21.974766       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.109.247.155"}
	I0318 12:46:33.428746       1 trace.go:236] Trace[223887498]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:5536aa16-9325-44f1-87a9-1d9d5ff4791f,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-eqh6em6iwbcxl6jtsswa4y5bkm,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 12:46:32.916) (total time: 512ms):
	Trace[223887498]: ["GuaranteedUpdate etcd3" audit-id:5536aa16-9325-44f1-87a9-1d9d5ff4791f,key:/leases/kube-system/apiserver-eqh6em6iwbcxl6jtsswa4y5bkm,type:*coordination.Lease,resource:leases.coordination.k8s.io 512ms (12:46:32.916)
	Trace[223887498]:  ---"Txn call completed" 511ms (12:46:33.428)]
	Trace[223887498]: [512.54667ms] [512.54667ms] END
	I0318 12:46:51.578769       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0318 12:46:58.704963       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.102.136:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.102.136:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.102.136:443: connect: connection refused
	W0318 12:46:58.709216       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 12:46:58.709284       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0318 12:46:58.713870       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.102.136:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.102.136:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.102.136:443: connect: connection refused
	I0318 12:46:58.714060       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0318 12:46:58.714602       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.102.136:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.102.136:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.102.136:443: connect: connection refused
	E0318 12:46:58.730267       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.102.136:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.102.136:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.102.136:443: connect: connection refused
	I0318 12:46:58.908361       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 12:47:51.580218       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 12:47:57.141202       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0318 12:47:57.384935       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.45.212"}
	
	
	==> kube-controller-manager [2ea1767b3abd6811be9eb3b724908222760cb3c4f268af71b6f4d8c4c42016c2] <==
	I0318 12:47:20.589958       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0318 12:47:20.590580       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0318 12:47:20.603396       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 12:47:20.622550       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 12:47:20.623586       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0318 12:47:26.682307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="35.161537ms"
	I0318 12:47:26.682431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="67.734µs"
	I0318 12:47:26.700773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="43.203321ms"
	I0318 12:47:26.702394       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="33.642µs"
	I0318 12:47:31.835454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="62.446µs"
	I0318 12:47:36.894379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="33.046281ms"
	I0318 12:47:36.896841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="51.786µs"
	I0318 12:47:37.497412       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0318 12:47:37.530416       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 12:47:37.679782       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 12:47:42.327004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="15.085922ms"
	I0318 12:47:42.327163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="66.263µs"
	I0318 12:47:50.048227       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0318 12:47:50.055925       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 12:47:50.114781       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 12:47:50.122288       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0318 12:47:51.765044       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 12:47:51.810156       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 12:47:54.927294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="3.713µs"
	I0318 12:47:56.524625       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="14.172µs"
	
	
	==> kube-proxy [7641c0665ab0b00f7ebec118d4b593d89282be513698be98c6999824b8184d10] <==
	I0318 12:46:09.960008       1 server_others.go:69] "Using iptables proxy"
	I0318 12:46:09.980085       1 node.go:141] Successfully retrieved node IP: 192.168.39.205
	I0318 12:46:10.087383       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:46:10.087462       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:46:10.096878       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:46:10.096941       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:46:10.097111       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:46:10.097141       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:46:10.099325       1 config.go:188] "Starting service config controller"
	I0318 12:46:10.099353       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:46:10.099373       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:46:10.099387       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:46:10.099798       1 config.go:315] "Starting node config controller"
	I0318 12:46:10.099804       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:46:10.200441       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:46:10.200448       1 shared_informer.go:318] Caches are synced for node config
	I0318 12:46:10.200629       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [43b114c0ac0bb6acad93f85efd4dab6517f3e7f61f89ac56371f3e1df2f0458d] <==
	W0318 12:45:51.717109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:45:51.717122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 12:45:52.535854       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:45:52.535968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 12:45:52.612145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:45:52.612236       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:45:52.621660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 12:45:52.621709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 12:45:52.631786       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:45:52.631862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 12:45:52.791022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:45:52.791072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 12:45:52.829899       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:45:52.829947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:45:52.863852       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 12:45:52.863928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 12:45:52.872921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 12:45:52.873000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 12:45:52.913160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 12:45:52.913276       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 12:45:53.061394       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 12:45:53.061548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 12:45:53.251248       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 12:45:53.251352       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:45:56.482755       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.082380    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bf1c4d73-4b36-4d8d-a497-58eeab0d4f6d" path="/var/lib/kubelet/pods/bf1c4d73-4b36-4d8d-a497-58eeab0d4f6d/volumes"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.345206    1256 topology_manager.go:215] "Topology Admit Handler" podUID="86b2012e-e452-410b-808c-3fc378157346" podNamespace="default" podName="nginx"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: E0318 12:47:57.345297    1256 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf1c4d73-4b36-4d8d-a497-58eeab0d4f6d" containerName="tiller"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: E0318 12:47:57.345308    1256 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe16147c-b2ae-4e50-9495-a1e8691f4762" containerName="helm-test"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: E0318 12:47:57.345316    1256 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="95ab8009-ede6-4b01-bcb6-cd68b09da803" containerName="registry-test"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: E0318 12:47:57.345327    1256 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de58d932-6f78-479f-9d49-55619fa3881a" containerName="registry"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.345378    1256 memory_manager.go:346] "RemoveStaleState removing state" podUID="95ab8009-ede6-4b01-bcb6-cd68b09da803" containerName="registry-test"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.345391    1256 memory_manager.go:346] "RemoveStaleState removing state" podUID="de58d932-6f78-479f-9d49-55619fa3881a" containerName="registry"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.345398    1256 memory_manager.go:346] "RemoveStaleState removing state" podUID="fe16147c-b2ae-4e50-9495-a1e8691f4762" containerName="helm-test"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.345406    1256 memory_manager.go:346] "RemoveStaleState removing state" podUID="bf1c4d73-4b36-4d8d-a497-58eeab0d4f6d" containerName="tiller"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.425217    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92cvg\" (UniqueName: \"kubernetes.io/projected/de58d932-6f78-479f-9d49-55619fa3881a-kube-api-access-92cvg\") pod \"de58d932-6f78-479f-9d49-55619fa3881a\" (UID: \"de58d932-6f78-479f-9d49-55619fa3881a\") "
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.425434    1256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcr9d\" (UniqueName: \"kubernetes.io/projected/86b2012e-e452-410b-808c-3fc378157346-kube-api-access-lcr9d\") pod \"nginx\" (UID: \"86b2012e-e452-410b-808c-3fc378157346\") " pod="default/nginx"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.425588    1256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/86b2012e-e452-410b-808c-3fc378157346-gcp-creds\") pod \"nginx\" (UID: \"86b2012e-e452-410b-808c-3fc378157346\") " pod="default/nginx"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.438162    1256 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de58d932-6f78-479f-9d49-55619fa3881a-kube-api-access-92cvg" (OuterVolumeSpecName: "kube-api-access-92cvg") pod "de58d932-6f78-479f-9d49-55619fa3881a" (UID: "de58d932-6f78-479f-9d49-55619fa3881a"). InnerVolumeSpecName "kube-api-access-92cvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.528072    1256 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-92cvg\" (UniqueName: \"kubernetes.io/projected/de58d932-6f78-479f-9d49-55619fa3881a-kube-api-access-92cvg\") on node \"addons-106685\" DevicePath \"\""
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.536964    1256 scope.go:117] "RemoveContainer" containerID="9a0f10cb7f9458a6732e739c6a47a6e0e0de8f02f96d0f95769eb463d400e6cd"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.780160    1256 scope.go:117] "RemoveContainer" containerID="9a0f10cb7f9458a6732e739c6a47a6e0e0de8f02f96d0f95769eb463d400e6cd"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: E0318 12:47:57.803843    1256 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a0f10cb7f9458a6732e739c6a47a6e0e0de8f02f96d0f95769eb463d400e6cd\": container with ID starting with 9a0f10cb7f9458a6732e739c6a47a6e0e0de8f02f96d0f95769eb463d400e6cd not found: ID does not exist" containerID="9a0f10cb7f9458a6732e739c6a47a6e0e0de8f02f96d0f95769eb463d400e6cd"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.803915    1256 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a0f10cb7f9458a6732e739c6a47a6e0e0de8f02f96d0f95769eb463d400e6cd"} err="failed to get container status \"9a0f10cb7f9458a6732e739c6a47a6e0e0de8f02f96d0f95769eb463d400e6cd\": rpc error: code = NotFound desc = could not find container \"9a0f10cb7f9458a6732e739c6a47a6e0e0de8f02f96d0f95769eb463d400e6cd\": container with ID starting with 9a0f10cb7f9458a6732e739c6a47a6e0e0de8f02f96d0f95769eb463d400e6cd not found: ID does not exist"
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.933541    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgfd7\" (UniqueName: \"kubernetes.io/projected/8ea57f10-a30d-4291-9636-1e99d163e226-kube-api-access-qgfd7\") pod \"8ea57f10-a30d-4291-9636-1e99d163e226\" (UID: \"8ea57f10-a30d-4291-9636-1e99d163e226\") "
	Mar 18 12:47:57 addons-106685 kubelet[1256]: I0318 12:47:57.962304    1256 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea57f10-a30d-4291-9636-1e99d163e226-kube-api-access-qgfd7" (OuterVolumeSpecName: "kube-api-access-qgfd7") pod "8ea57f10-a30d-4291-9636-1e99d163e226" (UID: "8ea57f10-a30d-4291-9636-1e99d163e226"). InnerVolumeSpecName "kube-api-access-qgfd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 12:47:58 addons-106685 kubelet[1256]: I0318 12:47:58.035377    1256 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qgfd7\" (UniqueName: \"kubernetes.io/projected/8ea57f10-a30d-4291-9636-1e99d163e226-kube-api-access-qgfd7\") on node \"addons-106685\" DevicePath \"\""
	Mar 18 12:47:58 addons-106685 kubelet[1256]: I0318 12:47:58.931871    1256 scope.go:117] "RemoveContainer" containerID="a4e7fe2f55adef04799de5ea8b0f32ff87df300f20ad94186b44de5ff7250573"
	Mar 18 12:47:59 addons-106685 kubelet[1256]: I0318 12:47:59.046147    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8ea57f10-a30d-4291-9636-1e99d163e226" path="/var/lib/kubelet/pods/8ea57f10-a30d-4291-9636-1e99d163e226/volumes"
	Mar 18 12:47:59 addons-106685 kubelet[1256]: I0318 12:47:59.046744    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="de58d932-6f78-479f-9d49-55619fa3881a" path="/var/lib/kubelet/pods/de58d932-6f78-479f-9d49-55619fa3881a/volumes"
	
	
	==> storage-provisioner [7afc9eafec80d5224773f6c89b8c88d3bb8b2a83aba6a1a8994b1bde1ab13e02] <==
	I0318 12:46:18.547676       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 12:46:18.605006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 12:46:18.605046       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 12:46:18.680105       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 12:46:18.680244       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-106685_2bc397de-f037-44b1-b5c8-1d7cc7c08af5!
	I0318 12:46:18.684149       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3b765950-073e-425a-929e-664db04b17c7", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-106685_2bc397de-f037-44b1-b5c8-1d7cc7c08af5 became leader
	I0318 12:46:18.780654       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-106685_2bc397de-f037-44b1-b5c8-1d7cc7c08af5!
	

                                                
                                                
-- /stdout --
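For reference, the storage-provisioner log above follows the usual Kubernetes leader-election pattern: attempt to acquire the kube-system/k8s.io-minikube-hostpath lock, then start the controller once the lease is held. The sketch below is a minimal, hypothetical client-go illustration of that pattern, not the provisioner's actual code; per the event above, the real provisioner locks on an Endpoints object, whereas this sketch uses the newer Lease lock. Lease name and namespace are taken from the log; the timings are assumptions.

// leaderelect_sketch.go: hypothetical leader-election sketch mirroring the
// "attempting to acquire" / "successfully acquired" pair in the log above.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Build a client from $KUBECONFIG (falls back to in-cluster config if empty).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // lock name seen in the log above
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // assumed timings, not the provisioner's
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}

Running two copies of this sketch against the same cluster shows one instance holding the lease and the other waiting, which mirrors the acquire/became-leader sequence in the storage-provisioner output.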
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-106685 -n addons-106685
helpers_test.go:261: (dbg) Run:  kubectl --context addons-106685 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-z2nvx ingress-nginx-admission-patch-q9qrg
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-106685 describe pod nginx task-pv-pod ingress-nginx-admission-create-z2nvx ingress-nginx-admission-patch-q9qrg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-106685 describe pod nginx task-pv-pod ingress-nginx-admission-create-z2nvx ingress-nginx-admission-patch-q9qrg: exit status 1 (82.293178ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-106685/192.168.39.205
	Start Time:       Mon, 18 Mar 2024 12:47:57 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lcr9d (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-lcr9d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/nginx to addons-106685
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-106685/192.168.39.205
	Start Time:       Mon, 18 Mar 2024 12:47:52 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-58kzt (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-58kzt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  9s    default-scheduler  Successfully assigned default/task-pv-pod to addons-106685
	  Normal  Pulling    8s    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-z2nvx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-q9qrg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-106685 describe pod nginx task-pv-pod ingress-nginx-admission-create-z2nvx ingress-nginx-admission-patch-q9qrg: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (11.59s)
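The post-mortem above first narrows to non-Running pods with a kubectl field selector and then describes them; the NotFound errors are expected when admission jobs have already been cleaned up. Below is a standalone, hypothetical Go sketch of that same two-step procedure, not the helpers_test.go implementation; the context name is copied from the report.

// postmortem_sketch.go: list non-Running pods via a field selector, then describe them.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	ctx := "addons-106685" // profile/context name from the report; adjust as needed

	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "listing non-running pods:", err)
		os.Exit(1)
	}

	pods := strings.Fields(string(out))
	if len(pods) == 0 {
		fmt.Println("no non-running pods")
		return
	}
	fmt.Println("non-running pods:", strings.Join(pods, " "))

	// Describe them; tolerate a non-zero exit, since some of the listed pods
	// may already be gone (exactly what produced the NotFound errors above).
	args := append([]string{"--context", ctx, "describe", "pod"}, pods...)
	desc := exec.Command("kubectl", args...)
	desc.Stdout, desc.Stderr = os.Stdout, os.Stderr
	_ = desc.Run()
}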

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-106685
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-106685: exit status 82 (2m0.506817056s)

                                                
                                                
-- stdout --
	* Stopping node "addons-106685"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-106685" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-106685
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-106685: exit status 11 (21.51512338s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-106685" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-106685
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-106685: exit status 11 (6.143586591s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-106685" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-106685
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-106685: exit status 11 (6.144484377s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-106685" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.31s)
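All three addon failures above reduce to one symptom: after the timed-out stop, SSH to the VM at 192.168.39.205:22 is unreachable, hence the repeated "no route to host". The following is a minimal, hypothetical Go sketch of probing that reachability before retrying addon commands; the address comes from the output above, and the timeouts are assumptions.

// reachcheck_sketch.go: wait until the VM's SSH port answers, the precondition
// the failing "addons enable/disable" commands above depend on.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "192.168.39.205:22" // VM address from the report
	deadline := time.Now().Add(30 * time.Second)

	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port reachable:", addr)
			return
		}
		if time.Now().After(deadline) {
			// Matches the "connect: no route to host" failures above.
			fmt.Fprintf(os.Stderr, "gave up waiting for %s: %v\n", addr, err)
			os.Exit(1)
		}
		time.Sleep(2 * time.Second)
	}
}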

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-044661 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
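The assertion at functional_test.go:914 is that the dashboard command eventually prints a URL on stdout. The sketch below is a hypothetical stand-in for that wait, not the actual test code: start the command, scan stdout for an http(s) line, and give up after a deadline. Binary path, port, and profile name are copied from the invocation above.

// dashboardurl_sketch.go: wait for "minikube dashboard --url" to print a URL.
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "dashboard", "--url",
		"--port", "36195", "-p", "functional-044661")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer cmd.Process.Kill()

	found := make(chan string, 1)
	go func() {
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "http://") || strings.HasPrefix(line, "https://") {
				found <- line
				return
			}
		}
	}()

	select {
	case url := <-found:
		fmt.Println("dashboard URL:", url)
	case <-time.After(5 * time.Minute): // the test above gave up after ~302s
		fmt.Fprintln(os.Stderr, "output didn't produce a URL")
		os.Exit(1)
	}
}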
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-044661 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-044661 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-044661 --alsologtostderr -v=1] stderr:
I0318 13:04:48.166508 1084097 out.go:291] Setting OutFile to fd 1 ...
I0318 13:04:48.166681 1084097 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:48.166693 1084097 out.go:304] Setting ErrFile to fd 2...
I0318 13:04:48.166702 1084097 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:48.167023 1084097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
I0318 13:04:48.167354 1084097 mustload.go:65] Loading cluster: functional-044661
I0318 13:04:48.167875 1084097 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:48.168494 1084097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:48.168547 1084097 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:48.184202 1084097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46149
I0318 13:04:48.184703 1084097 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:48.185432 1084097 main.go:141] libmachine: Using API Version  1
I0318 13:04:48.185460 1084097 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:48.185901 1084097 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:48.186154 1084097 main.go:141] libmachine: (functional-044661) Calling .GetState
I0318 13:04:48.187942 1084097 host.go:66] Checking if "functional-044661" exists ...
I0318 13:04:48.188409 1084097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:48.188465 1084097 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:48.203263 1084097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
I0318 13:04:48.203758 1084097 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:48.204368 1084097 main.go:141] libmachine: Using API Version  1
I0318 13:04:48.204415 1084097 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:48.204804 1084097 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:48.205027 1084097 main.go:141] libmachine: (functional-044661) Calling .DriverName
I0318 13:04:48.205208 1084097 api_server.go:166] Checking apiserver status ...
I0318 13:04:48.205286 1084097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0318 13:04:48.205319 1084097 main.go:141] libmachine: (functional-044661) Calling .GetSSHHostname
I0318 13:04:48.208263 1084097 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:48.208706 1084097 main.go:141] libmachine: (functional-044661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:12:98", ip: ""} in network mk-functional-044661: {Iface:virbr1 ExpiryTime:2024-03-18 13:54:41 +0000 UTC Type:0 Mac:52:54:00:18:12:98 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-044661 Clientid:01:52:54:00:18:12:98}
I0318 13:04:48.208738 1084097 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined IP address 192.168.39.198 and MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:48.208917 1084097 main.go:141] libmachine: (functional-044661) Calling .GetSSHPort
I0318 13:04:48.209138 1084097 main.go:141] libmachine: (functional-044661) Calling .GetSSHKeyPath
I0318 13:04:48.209319 1084097 main.go:141] libmachine: (functional-044661) Calling .GetSSHUsername
I0318 13:04:48.209482 1084097 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/functional-044661/id_rsa Username:docker}
I0318 13:04:48.312851 1084097 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7591/cgroup
W0318 13:04:48.324813 1084097 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7591/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0318 13:04:48.324889 1084097 ssh_runner.go:195] Run: ls
I0318 13:04:48.334036 1084097 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8441/healthz ...
I0318 13:04:48.341294 1084097 api_server.go:279] https://192.168.39.198:8441/healthz returned 200:
ok
W0318 13:04:48.341347 1084097 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0318 13:04:48.341515 1084097 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:48.341544 1084097 addons.go:69] Setting dashboard=true in profile "functional-044661"
I0318 13:04:48.341552 1084097 addons.go:234] Setting addon dashboard=true in "functional-044661"
I0318 13:04:48.341578 1084097 host.go:66] Checking if "functional-044661" exists ...
I0318 13:04:48.341825 1084097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:48.341864 1084097 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:48.358454 1084097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42871
I0318 13:04:48.358937 1084097 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:48.359474 1084097 main.go:141] libmachine: Using API Version  1
I0318 13:04:48.359503 1084097 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:48.359903 1084097 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:48.360435 1084097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:48.360502 1084097 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:48.376397 1084097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
I0318 13:04:48.376821 1084097 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:48.377314 1084097 main.go:141] libmachine: Using API Version  1
I0318 13:04:48.377350 1084097 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:48.377666 1084097 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:48.377841 1084097 main.go:141] libmachine: (functional-044661) Calling .GetState
I0318 13:04:48.379526 1084097 main.go:141] libmachine: (functional-044661) Calling .DriverName
I0318 13:04:48.381928 1084097 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0318 13:04:48.383716 1084097 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0318 13:04:48.385218 1084097 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0318 13:04:48.385243 1084097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0318 13:04:48.385268 1084097 main.go:141] libmachine: (functional-044661) Calling .GetSSHHostname
I0318 13:04:48.388484 1084097 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:48.388921 1084097 main.go:141] libmachine: (functional-044661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:12:98", ip: ""} in network mk-functional-044661: {Iface:virbr1 ExpiryTime:2024-03-18 13:54:41 +0000 UTC Type:0 Mac:52:54:00:18:12:98 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-044661 Clientid:01:52:54:00:18:12:98}
I0318 13:04:48.388951 1084097 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined IP address 192.168.39.198 and MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:48.389165 1084097 main.go:141] libmachine: (functional-044661) Calling .GetSSHPort
I0318 13:04:48.389364 1084097 main.go:141] libmachine: (functional-044661) Calling .GetSSHKeyPath
I0318 13:04:48.389544 1084097 main.go:141] libmachine: (functional-044661) Calling .GetSSHUsername
I0318 13:04:48.389661 1084097 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/functional-044661/id_rsa Username:docker}
I0318 13:04:48.503180 1084097 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0318 13:04:48.503212 1084097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0318 13:04:48.524697 1084097 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0318 13:04:48.524726 1084097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0318 13:04:48.548574 1084097 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0318 13:04:48.548611 1084097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0318 13:04:48.570453 1084097 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0318 13:04:48.570486 1084097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0318 13:04:48.595178 1084097 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
I0318 13:04:48.595208 1084097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0318 13:04:48.621097 1084097 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0318 13:04:48.621132 1084097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0318 13:04:48.643410 1084097 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0318 13:04:48.643443 1084097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0318 13:04:48.664682 1084097 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0318 13:04:48.664717 1084097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0318 13:04:48.686240 1084097 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0318 13:04:48.686278 1084097 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0318 13:04:48.707911 1084097 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0318 13:04:49.926453 1084097 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.218489079s)
I0318 13:04:49.926525 1084097 main.go:141] libmachine: Making call to close driver server
I0318 13:04:49.926545 1084097 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:04:49.926883 1084097 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:04:49.926902 1084097 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 13:04:49.926911 1084097 main.go:141] libmachine: Making call to close driver server
I0318 13:04:49.926919 1084097 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:04:49.927146 1084097 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:04:49.927157 1084097 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 13:04:49.927181 1084097 main.go:141] libmachine: (functional-044661) DBG | Closing plugin on server side
I0318 13:04:49.929140 1084097 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-044661 addons enable metrics-server

                                                
                                                
I0318 13:04:49.930618 1084097 addons.go:197] Writing out "functional-044661" config to set dashboard=true...
W0318 13:04:49.930853 1084097 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0318 13:04:49.931610 1084097 kapi.go:59] client config for functional-044661: &rest.Config{Host:"https://192.168.39.198:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt", KeyFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.key", CAFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0318 13:04:49.948986 1084097 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  64820e15-9acb-4d03-bdbd-b2c8a0471d6e 608 0 2024-03-18 13:04:49 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-03-18 13:04:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.99.2.206,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.99.2.206],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0318 13:04:49.949178 1084097 out.go:239] * Launching proxy ...
* Launching proxy ...
I0318 13:04:49.949265 1084097 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-044661 proxy --port 36195]
I0318 13:04:49.949569 1084097 dashboard.go:157] Waiting for kubectl to output host:port ...
I0318 13:04:50.010439 1084097 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0318 13:04:50.010489 1084097 out.go:239] * Verifying proxy health ...
* Verifying proxy health ...
I0318 13:04:50.027006 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9bec9cc9-0163-4770-83e2-6922a7707eb0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:49 GMT]] Body:0xc002392640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022f6120 TLS:<nil>}
I0318 13:04:50.027102 1084097 retry.go:31] will retry after 80.974µs: Temporary Error: unexpected response code: 503
I0318 13:04:50.034237 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c416d9d-ecdd-40f9-9688-5ea2c36f26f9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002026dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023b5320 TLS:<nil>}
I0318 13:04:50.034306 1084097 retry.go:31] will retry after 138.716µs: Temporary Error: unexpected response code: 503
I0318 13:04:50.040209 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6579c06d-ca09-4e34-a8f0-5d10ddb718ba] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc0022e28c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00203a240 TLS:<nil>}
I0318 13:04:50.040288 1084097 retry.go:31] will retry after 152.081µs: Temporary Error: unexpected response code: 503
I0318 13:04:50.054370 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f13f4736-655b-4883-a222-55b58c0e1252] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc0022e29c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022f6480 TLS:<nil>}
I0318 13:04:50.054443 1084097 retry.go:31] will retry after 273.199µs: Temporary Error: unexpected response code: 503
I0318 13:04:50.061806 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c5647cb7-4e64-4f19-acf4-20e666d5a202] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002026ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022f66c0 TLS:<nil>}
I0318 13:04:50.061866 1084097 retry.go:31] will retry after 709.87µs: Temporary Error: unexpected response code: 503
I0318 13:04:50.067757 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[702d35ce-758f-4795-b8c5-bbb712024257] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002026fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00203a480 TLS:<nil>}
I0318 13:04:50.067850 1084097 retry.go:31] will retry after 476.314µs: Temporary Error: unexpected response code: 503
I0318 13:04:50.077016 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[829e7135-f845-41ea-a4a9-1d0bb9f44ea7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc0022e2ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00203a6c0 TLS:<nil>}
I0318 13:04:50.077128 1084097 retry.go:31] will retry after 1.088219ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.083742 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[97cda991-5c3d-4a98-af98-45ef0a9ed5f2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc0022e2c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022f6900 TLS:<nil>}
I0318 13:04:50.083812 1084097 retry.go:31] will retry after 2.140845ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.091921 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b2b3b2f3-0718-4bce-99b4-190dd0cb47fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002083a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022f6b40 TLS:<nil>}
I0318 13:04:50.092002 1084097 retry.go:31] will retry after 2.644239ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.105239 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7ec8b29a-b86a-411c-91ec-56845e9c8b98] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc0022e2d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0020b9200 TLS:<nil>}
I0318 13:04:50.105311 1084097 retry.go:31] will retry after 4.274616ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.115278 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e41a7463-0556-48fb-ae22-f323af3e8b5b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002392880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022f6ea0 TLS:<nil>}
I0318 13:04:50.115360 1084097 retry.go:31] will retry after 3.42894ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.124392 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f030c319-5515-4667-978a-d5c58c743f7d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002083b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023b5560 TLS:<nil>}
I0318 13:04:50.124459 1084097 retry.go:31] will retry after 12.485196ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.142654 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fa8da9b1-2561-48da-84bd-ee110d7a2a67] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc0023929c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0020b9440 TLS:<nil>}
I0318 13:04:50.142747 1084097 retry.go:31] will retry after 17.407699ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.166058 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e37c1c26-c585-4ffe-8213-1452a5bd587d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002392ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023b57a0 TLS:<nil>}
I0318 13:04:50.166135 1084097 retry.go:31] will retry after 14.960542ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.185595 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c76eed92-7d0d-423b-8c21-83758b72e28f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002392c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023b59e0 TLS:<nil>}
I0318 13:04:50.185685 1084097 retry.go:31] will retry after 21.263496ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.210940 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce1278e5-a368-4ba3-a642-51e1176e4a5a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc0022e2e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023b5c20 TLS:<nil>}
I0318 13:04:50.211050 1084097 retry.go:31] will retry after 48.624382ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.266585 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[42767d7a-2186-447a-8093-7306607405ef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002392d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022f70e0 TLS:<nil>}
I0318 13:04:50.266688 1084097 retry.go:31] will retry after 66.886436ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.337664 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fb07c276-bbd2-4a67-92b0-dcf72b3897c6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002083cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023b5e60 TLS:<nil>}
I0318 13:04:50.337738 1084097 retry.go:31] will retry after 115.767322ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.457087 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5f1272f8-cb66-467b-946c-badf3251295d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc0022e2fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0020b9680 TLS:<nil>}
I0318 13:04:50.457174 1084097 retry.go:31] will retry after 126.717331ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.588162 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eceea0e5-8711-4965-8e64-ef43bdec4642] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002027140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022f7320 TLS:<nil>}
I0318 13:04:50.588250 1084097 retry.go:31] will retry after 219.244465ms: Temporary Error: unexpected response code: 503
I0318 13:04:50.811310 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d36b0a85-f5b6-410a-9e80-0ae911103c01] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:50 GMT]] Body:0xc002083e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00203a900 TLS:<nil>}
I0318 13:04:50.811386 1084097 retry.go:31] will retry after 392.697549ms: Temporary Error: unexpected response code: 503
I0318 13:04:51.209034 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[35053f3b-7ecf-46dc-8ba3-1e19348e9708] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:51 GMT]] Body:0xc002083f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0020b98c0 TLS:<nil>}
I0318 13:04:51.209112 1084097 retry.go:31] will retry after 675.854982ms: Temporary Error: unexpected response code: 503
I0318 13:04:51.890421 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[61868603-75ce-48af-bde9-9b10e7c12bf3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:51 GMT]] Body:0xc002392e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0020b9b00 TLS:<nil>}
I0318 13:04:51.890498 1084097 retry.go:31] will retry after 1.032459181s: Temporary Error: unexpected response code: 503
I0318 13:04:52.927672 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b3efbb8-49f3-4bd4-add5-34fe3fbf4aea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:52 GMT]] Body:0xc0021f00c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002442120 TLS:<nil>}
I0318 13:04:52.927762 1084097 retry.go:31] will retry after 864.180115ms: Temporary Error: unexpected response code: 503
I0318 13:04:53.796131 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c594dfd0-1a2d-4900-ad15-e89b203f440e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:53 GMT]] Body:0xc0021f0140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002442360 TLS:<nil>}
I0318 13:04:53.796222 1084097 retry.go:31] will retry after 2.090976287s: Temporary Error: unexpected response code: 503
I0318 13:04:55.890385 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cb65abfb-fbca-464b-b0ae-66c956a5d127] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:55 GMT]] Body:0xc002027280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0020b9d40 TLS:<nil>}
I0318 13:04:55.890475 1084097 retry.go:31] will retry after 1.631261635s: Temporary Error: unexpected response code: 503
I0318 13:04:57.526673 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[042c8d53-3058-46c5-a572-be9e0c86bacb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:04:57 GMT]] Body:0xc0021f0280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00203ab40 TLS:<nil>}
I0318 13:04:57.526758 1084097 retry.go:31] will retry after 2.654480449s: Temporary Error: unexpected response code: 503
I0318 13:05:00.184961 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[31fa474b-528b-472d-acae-09e3bbfa89c5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:05:00 GMT]] Body:0xc002393080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021fc000 TLS:<nil>}
I0318 13:05:00.185035 1084097 retry.go:31] will retry after 8.177110049s: Temporary Error: unexpected response code: 503
I0318 13:05:08.366140 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a03b0c6c-d565-49f5-887c-156d209961d0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:05:08 GMT]] Body:0xc0020273c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021fc240 TLS:<nil>}
I0318 13:05:08.366262 1084097 retry.go:31] will retry after 12.460786553s: Temporary Error: unexpected response code: 503
I0318 13:05:20.831169 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[95e6bef3-e2a1-4e3d-8579-9693bd147220] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:05:20 GMT]] Body:0xc0021f04c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021fc480 TLS:<nil>}
I0318 13:05:20.831260 1084097 retry.go:31] will retry after 13.260615099s: Temporary Error: unexpected response code: 503
I0318 13:05:34.095800 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[decd1a37-d7ba-4c68-9e67-140a9f5361bc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:05:34 GMT]] Body:0xc0020274c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021fc5a0 TLS:<nil>}
I0318 13:05:34.095910 1084097 retry.go:31] will retry after 25.526171291s: Temporary Error: unexpected response code: 503
I0318 13:05:59.626173 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4b6c5e53-f80f-4524-af76-3fa8f23ef76d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:05:59 GMT]] Body:0xc0022e3100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021fc7e0 TLS:<nil>}
I0318 13:05:59.626250 1084097 retry.go:31] will retry after 19.975549707s: Temporary Error: unexpected response code: 503
I0318 13:06:19.605616 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c37df07a-32e0-41b1-8e4e-12231e1a1694] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:06:19 GMT]] Body:0xc0022e3180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0024425a0 TLS:<nil>}
I0318 13:06:19.605693 1084097 retry.go:31] will retry after 58.270350643s: Temporary Error: unexpected response code: 503
I0318 13:07:17.884850 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f72b0527-02f3-4c04-b40b-c546c00478fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:07:17 GMT]] Body:0xc002026100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021fca20 TLS:<nil>}
I0318 13:07:17.884939 1084097 retry.go:31] will retry after 1m21.094453576s: Temporary Error: unexpected response code: 503
I0318 13:08:38.984719 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[defcddbb-88f6-4aae-b3c7-460789f16738] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:08:38 GMT]] Body:0xc0021f0140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00203a000 TLS:<nil>}
I0318 13:08:38.984828 1084097 retry.go:31] will retry after 36.60757911s: Temporary Error: unexpected response code: 503
I0318 13:09:15.598479 1084097 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c58b230a-298a-42f4-af06-c0077ec7941e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 18 Mar 2024 13:09:15 GMT]] Body:0xc002026100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021fcc60 TLS:<nil>}
I0318 13:09:15.598583 1084097 retry.go:31] will retry after 42.260066464s: Temporary Error: unexpected response code: 503
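The repeated 503s above come back from the apiserver's service proxy at .../services/http:kubernetes-dashboard:/proxy/; a 503 on that path typically indicates that the kubernetes-dashboard Service has no ready endpoints yet, so the test keeps polling with a growing backoff. A minimal sketch of that poll-with-backoff loop is shown below (a hypothetical standalone program, not minikube's dashboard.go; the URL and port 36195 are the local proxy address from this particular run):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL taken from the retry log above; the port is the local proxy port of this run.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"

	backoff := 500 * time.Millisecond
	deadline := time.Now().Add(5 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("dashboard is serving")
				return
			}
			fmt.Printf("unexpected response code: %d, retrying in %v\n", resp.StatusCode, backoff)
		} else {
			fmt.Printf("request failed: %v, retrying in %v\n", err, backoff)
		}
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2 // roughly the growing intervals visible in the retry log
		}
	}
	fmt.Println("gave up waiting for the dashboard")
}
```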
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-044661 -n functional-044661
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 logs -n 25: (1.277750992s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-044661 ssh -- ls                                               | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | -la /mount-9p                                                             |                   |         |         |                     |                     |
	| image          | functional-044661 image rm                                                | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-044661                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| ssh            | functional-044661 ssh sudo                                                | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC |                     |
	|                | umount -f /mount-9p                                                       |                   |         |         |                     |                     |
	| image          | functional-044661 image ls                                                | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	| mount          | -p functional-044661                                                      | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup537190675/001:/mount1     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                   |         |         |                     |                     |
	| mount          | -p functional-044661                                                      | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup537190675/001:/mount2     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                   |         |         |                     |                     |
	| ssh            | functional-044661 ssh findmnt                                             | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC |                     |
	|                | -T /mount1                                                                |                   |         |         |                     |                     |
	| mount          | -p functional-044661                                                      | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup537190675/001:/mount3     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                   |         |         |                     |                     |
	| image          | functional-044661 image load                                              | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| ssh            | functional-044661 ssh findmnt                                             | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | -T /mount1                                                                |                   |         |         |                     |                     |
	| ssh            | functional-044661 ssh findmnt                                             | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | -T /mount2                                                                |                   |         |         |                     |                     |
	| ssh            | functional-044661 ssh findmnt                                             | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | -T /mount3                                                                |                   |         |         |                     |                     |
	| mount          | -p functional-044661                                                      | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC |                     |
	|                | --kill=true                                                               |                   |         |         |                     |                     |
	| update-context | functional-044661                                                         | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| update-context | functional-044661                                                         | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| update-context | functional-044661                                                         | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| image          | functional-044661 image ls                                                | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	| image          | functional-044661 image save --daemon                                     | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-044661                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-044661                                                         | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | image ls --format short                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-044661                                                         | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | image ls --format yaml                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| ssh            | functional-044661 ssh pgrep                                               | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC |                     |
	|                | buildkitd                                                                 |                   |         |         |                     |                     |
	| image          | functional-044661                                                         | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | image ls --format json                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-044661 image build -t                                          | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:05 UTC |
	|                | localhost/my-image:functional-044661                                      |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                   |         |         |                     |                     |
	| image          | functional-044661                                                         | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:04 UTC | 18 Mar 24 13:04 UTC |
	|                | image ls --format table                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-044661 image ls                                                | functional-044661 | jenkins | v1.32.0 | 18 Mar 24 13:05 UTC | 18 Mar 24 13:05 UTC |
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:04:48
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:04:48.011739 1084070 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:04:48.011906 1084070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:04:48.011917 1084070 out.go:304] Setting ErrFile to fd 2...
	I0318 13:04:48.011921 1084070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:04:48.012245 1084070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:04:48.012814 1084070 out.go:298] Setting JSON to false
	I0318 13:04:48.013823 1084070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17235,"bootTime":1710749853,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:04:48.013929 1084070 start.go:139] virtualization: kvm guest
	I0318 13:04:48.016423 1084070 out.go:177] * [functional-044661] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:04:48.017862 1084070 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 13:04:48.019291 1084070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:04:48.017877 1084070 notify.go:220] Checking for updates...
	I0318 13:04:48.022114 1084070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:04:48.023772 1084070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:04:48.025260 1084070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:04:48.026727 1084070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:04:48.028465 1084070 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:04:48.028844 1084070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:04:48.028893 1084070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:04:48.044248 1084070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45315
	I0318 13:04:48.044627 1084070 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:04:48.045250 1084070 main.go:141] libmachine: Using API Version  1
	I0318 13:04:48.045279 1084070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:04:48.045609 1084070 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:04:48.045806 1084070 main.go:141] libmachine: (functional-044661) Calling .DriverName
	I0318 13:04:48.046087 1084070 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:04:48.046511 1084070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:04:48.046581 1084070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:04:48.061823 1084070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0318 13:04:48.062331 1084070 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:04:48.062833 1084070 main.go:141] libmachine: Using API Version  1
	I0318 13:04:48.062858 1084070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:04:48.063225 1084070 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:04:48.063393 1084070 main.go:141] libmachine: (functional-044661) Calling .DriverName
	I0318 13:04:48.097198 1084070 out.go:177] * Using the kvm2 driver based on the existing profile
	I0318 13:04:48.098498 1084070 start.go:297] selected driver: kvm2
	I0318 13:04:48.098523 1084070 start.go:901] validating driver "kvm2" against &{Name:functional-044661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-044661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:04:48.098630 1084070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:04:48.100868 1084070 out.go:177] 
	W0318 13:04:48.102114 1084070 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I0318 13:04:48.103269 1084070 out.go:177] 
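The start captured above was rejected during pre-flight validation: 250 MiB (about 262 MB) is below minikube's usable minimum of 1800 MB. The sketch below reproduces that arithmetic only; it is a hypothetical helper written for illustration, not minikube's actual validation code:

```go
package main

import "fmt"

// checkRequestedMemory mirrors the comparison behind RSRC_INSUFFICIENT_REQ_MEMORY:
// a request given in MiB is checked against a minimum expressed in MB.
func checkRequestedMemory(requestedMiB int) error {
	const minimumMB = 1800
	requestedMB := requestedMiB * 1024 * 1024 / 1000 / 1000 // MiB -> MB
	if requestedMB < minimumMB {
		return fmt.Errorf("requested memory allocation %d MiB (~%d MB) is below the usable minimum of %d MB",
			requestedMiB, requestedMB, minimumMB)
	}
	return nil
}

func main() {
	// 250 MiB, as in the failed start above.
	if err := checkRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}
```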
	
	
	==> CRI-O <==
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.931106954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767388931078841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260145,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7e527f0-990f-4a3c-b905-214a0c164ee9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.931707404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89494354-1ac8-4850-8ce4-f862ba9571b6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.931798430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89494354-1ac8-4850-8ce4-f862ba9571b6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.932280979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdb656c49d152d74f2b317a4f9eb4b85df4e9e1be562d639ccba4a33c9a1dc6f,PodSandboxId:f74e5937a3ddd8fa829f3149b2134c1af9aec1d7567e559df5c0ce37aa12e2ce,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e,State:CONTAINER_RUNNING,CreatedAt:1710767096542229932,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64d29328-a344-484d-aef4-1328988742bc,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7e926b,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051cc3b461767ae4e1efd0ccd5c081f38e455dc51da116c6302a83e5e9d0cb2,PodSandboxId:916a67a1efdfb86181fa33d17ee414e77d33f31a71d3818a8fe4f6eb95ce04bc,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1710767095791797781,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-7fd5cb4ddc-m9pkt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ef3cd087-a01f-4379-830d-4e2806d83950,},Annotations:map[string]string{io.kubernetes.containe
r.hash: fa454345,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373346d075233044330a0abf48eb3f47a2aa5848d30012cb81b244c544a94606,PodSandboxId:9d53addf62995852f65a3bf29215c429cd164b012006d54219150138ad70047b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1710767087570567874,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d066ed88-aef1-
43ad-a01f-c570dd6c903d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948e223,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8293db61940f60e5dbc994465ce6168f58281d2ec15f045b8f414903bdbc99,PodSandboxId:0e5657b9fc32fe7faff8e1530d06507fb2575c570c7709f848a49adff605c8e8,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710767077725604302,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-859648c796-kt8t8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38fdc46-99bb-466e-a85b-
c3f2800cbebc,},Annotations:map[string]string{io.kubernetes.container.hash: d078068f,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0143674a5e4aae328cd2a98d18ed6527b5d52ae35924760d8e7a2ede48caadb,PodSandboxId:1facfbfc7ce8fd242de07dca54b2d83cfeaa14ed1b62b4124a2499ad6184edb8,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710767063061385713,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-
node-d7447cc7f-lt97j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 371df4ae-c61b-414f-9460-d8bafb56af01,},Annotations:map[string]string{io.kubernetes.container.hash: 58731939,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b181a8d8a6db154918a89c67c712133c15e81b5264b0145d4ef33bb0ce6a16,PodSandboxId:42d32bc6514e9fe4b96a78e2be07d21bac50beba2e6337d4b20aa0c41632a142,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710767062978117836,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name:
hello-node-connect-55497b8b78-52h22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11e0448b-c51e-40b5-85c7-0c8f195ba010,},Annotations:map[string]string{io.kubernetes.container.hash: 30ae6b78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b9303e127fcbd29c8fa1c0f1caeb0a41638c2d93051a318ec2784268307e67,PodSandboxId:c0d36f2eb7a03f300bf9ce3f4dd25b730e20e83de988b8019649389ccce9863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767048960620505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55d6dcce-caf3-4ca3-a51f-9b1f34aba0e3,},Annotations:map[string]string{io.kubernetes.container.hash: 8d162b2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb7c556a9674287544d856529d0ab9453806b7026882e111bdffabc8c25f692,PodSandboxId:9210d24c1dcaa7ec0029db9e5909e00d2b91a836b55aba03bac5574af45cc669,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767047607207609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cqnlm,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a4cba0-5091-4123-9b18-554b2fce114c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d127deb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca41d06e94789fa06311fc849d68085c83452e03f77b77f0b5c469f7d89facf,PodSandboxId:539ba2d0ac9c86eaf04014376bf80a191738fc5a25184228f421c05bc0f3b523,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606
d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767047512773792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vhf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b70a2-d392-4c69-ab82-11da787bf094,},Annotations:map[string]string{io.kubernetes.container.hash: 6a4dfb87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6939267e1734ab0be1f6a081905ac3d5d1c6da66e7420419778003a31715eaab,PodSandboxId:9d23e582e9201ae91cadd04233114e7f83ba52272e97e6a892823be8c1e9995f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767047191049743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdk25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980c5e40-0f52-4f0f-8417-7dc8e3072b1a,},Annotations:map[string]string{io.kubernetes.container.hash: b76f2467,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee5a3077d6af7a82024f4824745aa8e97afa4fe163d038f9485c0be699a9300,PodSandboxId:cff17a059c4136fc8e63fb1aef38ab6f9ded8acb648862b0b44f7b4de4fbf97b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b
32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767027979702424,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd148618c332d156d980f973e894e9d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10f4103aedd0424f9e4f14418a36abcdae012dcfd0b6da0363469a647b7e9e8,PodSandboxId:fec862bd04df507f4c143be3c16876b25d6dd8f641d9ceabd4c1cd86fb4a03f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e718
8be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767027919728371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b7e9ba7f345b52a5170d5fee69251f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9822eea9c4fbffc45a5988ae06460a6b395d117320ac170d028ffe83fddc1d,PodSandboxId:d6d9506c22f843194ff4a80ec672fa50c9d54c532c0d7caee264c31190ce3096,Metadata:&ContainerMetadata{Name:etcd,Attempt:5,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf
2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767027922601405,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81cc44d502c421c38b54718cde7c2bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5effa017,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2a3ecebaa3c029284ecdffad619497e4a1d268e2814c1735004dbe3f602617,PodSandboxId:2e158247ca4773f78dfd2152878096605bee5b2afafefe7184098efa3df8c01a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c25
7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767027870820276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fcd0ef7ab3c94d6a0d6e0a57891ba75,},Annotations:map[string]string{io.kubernetes.container.hash: 13b69ddd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89494354-1ac8-4850-8ce4-f862ba9571b6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.973158768Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7d602bf-516b-4328-9050-000a53ba36ec name=/runtime.v1.RuntimeService/Version
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.973242463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7d602bf-516b-4328-9050-000a53ba36ec name=/runtime.v1.RuntimeService/Version
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.974382895Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86cb48d8-79a5-40ee-95ac-a63c319fc16f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.975386441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767388975357429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260145,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86cb48d8-79a5-40ee-95ac-a63c319fc16f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.976390563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=056a83e3-2c0c-42fd-815f-9c9f3bc0026d name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.976445231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=056a83e3-2c0c-42fd-815f-9c9f3bc0026d name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:48 functional-044661 crio[4959]: time="2024-03-18 13:09:48.976760807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdb656c49d152d74f2b317a4f9eb4b85df4e9e1be562d639ccba4a33c9a1dc6f,PodSandboxId:f74e5937a3ddd8fa829f3149b2134c1af9aec1d7567e559df5c0ce37aa12e2ce,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e,State:CONTAINER_RUNNING,CreatedAt:1710767096542229932,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64d29328-a344-484d-aef4-1328988742bc,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7e926b,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051cc3b461767ae4e1efd0ccd5c081f38e455dc51da116c6302a83e5e9d0cb2,PodSandboxId:916a67a1efdfb86181fa33d17ee414e77d33f31a71d3818a8fe4f6eb95ce04bc,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1710767095791797781,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-7fd5cb4ddc-m9pkt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ef3cd087-a01f-4379-830d-4e2806d83950,},Annotations:map[string]string{io.kubernetes.containe
r.hash: fa454345,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373346d075233044330a0abf48eb3f47a2aa5848d30012cb81b244c544a94606,PodSandboxId:9d53addf62995852f65a3bf29215c429cd164b012006d54219150138ad70047b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1710767087570567874,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d066ed88-aef1-
43ad-a01f-c570dd6c903d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948e223,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8293db61940f60e5dbc994465ce6168f58281d2ec15f045b8f414903bdbc99,PodSandboxId:0e5657b9fc32fe7faff8e1530d06507fb2575c570c7709f848a49adff605c8e8,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710767077725604302,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-859648c796-kt8t8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38fdc46-99bb-466e-a85b-
c3f2800cbebc,},Annotations:map[string]string{io.kubernetes.container.hash: d078068f,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0143674a5e4aae328cd2a98d18ed6527b5d52ae35924760d8e7a2ede48caadb,PodSandboxId:1facfbfc7ce8fd242de07dca54b2d83cfeaa14ed1b62b4124a2499ad6184edb8,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710767063061385713,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-
node-d7447cc7f-lt97j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 371df4ae-c61b-414f-9460-d8bafb56af01,},Annotations:map[string]string{io.kubernetes.container.hash: 58731939,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b181a8d8a6db154918a89c67c712133c15e81b5264b0145d4ef33bb0ce6a16,PodSandboxId:42d32bc6514e9fe4b96a78e2be07d21bac50beba2e6337d4b20aa0c41632a142,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710767062978117836,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name:
hello-node-connect-55497b8b78-52h22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11e0448b-c51e-40b5-85c7-0c8f195ba010,},Annotations:map[string]string{io.kubernetes.container.hash: 30ae6b78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b9303e127fcbd29c8fa1c0f1caeb0a41638c2d93051a318ec2784268307e67,PodSandboxId:c0d36f2eb7a03f300bf9ce3f4dd25b730e20e83de988b8019649389ccce9863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767048960620505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55d6dcce-caf3-4ca3-a51f-9b1f34aba0e3,},Annotations:map[string]string{io.kubernetes.container.hash: 8d162b2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb7c556a9674287544d856529d0ab9453806b7026882e111bdffabc8c25f692,PodSandboxId:9210d24c1dcaa7ec0029db9e5909e00d2b91a836b55aba03bac5574af45cc669,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767047607207609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cqnlm,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a4cba0-5091-4123-9b18-554b2fce114c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d127deb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca41d06e94789fa06311fc849d68085c83452e03f77b77f0b5c469f7d89facf,PodSandboxId:539ba2d0ac9c86eaf04014376bf80a191738fc5a25184228f421c05bc0f3b523,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606
d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767047512773792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vhf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b70a2-d392-4c69-ab82-11da787bf094,},Annotations:map[string]string{io.kubernetes.container.hash: 6a4dfb87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6939267e1734ab0be1f6a081905ac3d5d1c6da66e7420419778003a31715eaab,PodSandboxId:9d23e582e9201ae91cadd04233114e7f83ba52272e97e6a892823be8c1e9995f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767047191049743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdk25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980c5e40-0f52-4f0f-8417-7dc8e3072b1a,},Annotations:map[string]string{io.kubernetes.container.hash: b76f2467,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee5a3077d6af7a82024f4824745aa8e97afa4fe163d038f9485c0be699a9300,PodSandboxId:cff17a059c4136fc8e63fb1aef38ab6f9ded8acb648862b0b44f7b4de4fbf97b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b
32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767027979702424,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd148618c332d156d980f973e894e9d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10f4103aedd0424f9e4f14418a36abcdae012dcfd0b6da0363469a647b7e9e8,PodSandboxId:fec862bd04df507f4c143be3c16876b25d6dd8f641d9ceabd4c1cd86fb4a03f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e718
8be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767027919728371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b7e9ba7f345b52a5170d5fee69251f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9822eea9c4fbffc45a5988ae06460a6b395d117320ac170d028ffe83fddc1d,PodSandboxId:d6d9506c22f843194ff4a80ec672fa50c9d54c532c0d7caee264c31190ce3096,Metadata:&ContainerMetadata{Name:etcd,Attempt:5,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf
2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767027922601405,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81cc44d502c421c38b54718cde7c2bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5effa017,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2a3ecebaa3c029284ecdffad619497e4a1d268e2814c1735004dbe3f602617,PodSandboxId:2e158247ca4773f78dfd2152878096605bee5b2afafefe7184098efa3df8c01a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c25
7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767027870820276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fcd0ef7ab3c94d6a0d6e0a57891ba75,},Annotations:map[string]string{io.kubernetes.container.hash: 13b69ddd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=056a83e3-2c0c-42fd-815f-9c9f3bc0026d name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.013631660Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d29d0d5-9db8-4a98-aa08-36a7a5750628 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.013718209Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d29d0d5-9db8-4a98-aa08-36a7a5750628 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.020084535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e318226-8603-4cd0-bc77-610cd99f7a56 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.022090406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767389021931021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260145,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e318226-8603-4cd0-bc77-610cd99f7a56 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.022953660Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ca0f7fc-2460-4406-97cc-0692884b5876 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.023138766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ca0f7fc-2460-4406-97cc-0692884b5876 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.023455216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdb656c49d152d74f2b317a4f9eb4b85df4e9e1be562d639ccba4a33c9a1dc6f,PodSandboxId:f74e5937a3ddd8fa829f3149b2134c1af9aec1d7567e559df5c0ce37aa12e2ce,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e,State:CONTAINER_RUNNING,CreatedAt:1710767096542229932,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64d29328-a344-484d-aef4-1328988742bc,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7e926b,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051cc3b461767ae4e1efd0ccd5c081f38e455dc51da116c6302a83e5e9d0cb2,PodSandboxId:916a67a1efdfb86181fa33d17ee414e77d33f31a71d3818a8fe4f6eb95ce04bc,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1710767095791797781,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-7fd5cb4ddc-m9pkt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ef3cd087-a01f-4379-830d-4e2806d83950,},Annotations:map[string]string{io.kubernetes.containe
r.hash: fa454345,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373346d075233044330a0abf48eb3f47a2aa5848d30012cb81b244c544a94606,PodSandboxId:9d53addf62995852f65a3bf29215c429cd164b012006d54219150138ad70047b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1710767087570567874,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d066ed88-aef1-
43ad-a01f-c570dd6c903d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948e223,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8293db61940f60e5dbc994465ce6168f58281d2ec15f045b8f414903bdbc99,PodSandboxId:0e5657b9fc32fe7faff8e1530d06507fb2575c570c7709f848a49adff605c8e8,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710767077725604302,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-859648c796-kt8t8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38fdc46-99bb-466e-a85b-
c3f2800cbebc,},Annotations:map[string]string{io.kubernetes.container.hash: d078068f,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0143674a5e4aae328cd2a98d18ed6527b5d52ae35924760d8e7a2ede48caadb,PodSandboxId:1facfbfc7ce8fd242de07dca54b2d83cfeaa14ed1b62b4124a2499ad6184edb8,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710767063061385713,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-
node-d7447cc7f-lt97j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 371df4ae-c61b-414f-9460-d8bafb56af01,},Annotations:map[string]string{io.kubernetes.container.hash: 58731939,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b181a8d8a6db154918a89c67c712133c15e81b5264b0145d4ef33bb0ce6a16,PodSandboxId:42d32bc6514e9fe4b96a78e2be07d21bac50beba2e6337d4b20aa0c41632a142,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710767062978117836,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name:
hello-node-connect-55497b8b78-52h22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11e0448b-c51e-40b5-85c7-0c8f195ba010,},Annotations:map[string]string{io.kubernetes.container.hash: 30ae6b78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b9303e127fcbd29c8fa1c0f1caeb0a41638c2d93051a318ec2784268307e67,PodSandboxId:c0d36f2eb7a03f300bf9ce3f4dd25b730e20e83de988b8019649389ccce9863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767048960620505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55d6dcce-caf3-4ca3-a51f-9b1f34aba0e3,},Annotations:map[string]string{io.kubernetes.container.hash: 8d162b2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb7c556a9674287544d856529d0ab9453806b7026882e111bdffabc8c25f692,PodSandboxId:9210d24c1dcaa7ec0029db9e5909e00d2b91a836b55aba03bac5574af45cc669,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767047607207609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cqnlm,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a4cba0-5091-4123-9b18-554b2fce114c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d127deb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca41d06e94789fa06311fc849d68085c83452e03f77b77f0b5c469f7d89facf,PodSandboxId:539ba2d0ac9c86eaf04014376bf80a191738fc5a25184228f421c05bc0f3b523,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606
d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767047512773792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vhf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b70a2-d392-4c69-ab82-11da787bf094,},Annotations:map[string]string{io.kubernetes.container.hash: 6a4dfb87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6939267e1734ab0be1f6a081905ac3d5d1c6da66e7420419778003a31715eaab,PodSandboxId:9d23e582e9201ae91cadd04233114e7f83ba52272e97e6a892823be8c1e9995f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767047191049743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdk25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980c5e40-0f52-4f0f-8417-7dc8e3072b1a,},Annotations:map[string]string{io.kubernetes.container.hash: b76f2467,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee5a3077d6af7a82024f4824745aa8e97afa4fe163d038f9485c0be699a9300,PodSandboxId:cff17a059c4136fc8e63fb1aef38ab6f9ded8acb648862b0b44f7b4de4fbf97b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b
32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767027979702424,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd148618c332d156d980f973e894e9d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10f4103aedd0424f9e4f14418a36abcdae012dcfd0b6da0363469a647b7e9e8,PodSandboxId:fec862bd04df507f4c143be3c16876b25d6dd8f641d9ceabd4c1cd86fb4a03f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e718
8be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767027919728371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b7e9ba7f345b52a5170d5fee69251f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9822eea9c4fbffc45a5988ae06460a6b395d117320ac170d028ffe83fddc1d,PodSandboxId:d6d9506c22f843194ff4a80ec672fa50c9d54c532c0d7caee264c31190ce3096,Metadata:&ContainerMetadata{Name:etcd,Attempt:5,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf
2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767027922601405,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81cc44d502c421c38b54718cde7c2bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5effa017,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2a3ecebaa3c029284ecdffad619497e4a1d268e2814c1735004dbe3f602617,PodSandboxId:2e158247ca4773f78dfd2152878096605bee5b2afafefe7184098efa3df8c01a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c25
7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767027870820276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fcd0ef7ab3c94d6a0d6e0a57891ba75,},Annotations:map[string]string{io.kubernetes.container.hash: 13b69ddd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ca0f7fc-2460-4406-97cc-0692884b5876 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.059925386Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04afb936-cf03-4e1c-9de5-ed126122e0ee name=/runtime.v1.RuntimeService/Version
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.060067548Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04afb936-cf03-4e1c-9de5-ed126122e0ee name=/runtime.v1.RuntimeService/Version
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.061498491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37142d1e-47b5-4850-9181-4a3ceeda0d85 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.062563499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767389062537213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260145,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37142d1e-47b5-4850-9181-4a3ceeda0d85 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.063392831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=120b8ec6-a715-4556-8adb-c65ea2ec79f3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.063477324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=120b8ec6-a715-4556-8adb-c65ea2ec79f3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:09:49 functional-044661 crio[4959]: time="2024-03-18 13:09:49.063759110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdb656c49d152d74f2b317a4f9eb4b85df4e9e1be562d639ccba4a33c9a1dc6f,PodSandboxId:f74e5937a3ddd8fa829f3149b2134c1af9aec1d7567e559df5c0ce37aa12e2ce,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e,State:CONTAINER_RUNNING,CreatedAt:1710767096542229932,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64d29328-a344-484d-aef4-1328988742bc,},Annotations:map[string]string{io.kubernetes.container.hash: 9b7e926b,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c051cc3b461767ae4e1efd0ccd5c081f38e455dc51da116c6302a83e5e9d0cb2,PodSandboxId:916a67a1efdfb86181fa33d17ee414e77d33f31a71d3818a8fe4f6eb95ce04bc,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1710767095791797781,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-7fd5cb4ddc-m9pkt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ef3cd087-a01f-4379-830d-4e2806d83950,},Annotations:map[string]string{io.kubernetes.containe
r.hash: fa454345,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373346d075233044330a0abf48eb3f47a2aa5848d30012cb81b244c544a94606,PodSandboxId:9d53addf62995852f65a3bf29215c429cd164b012006d54219150138ad70047b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1710767087570567874,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d066ed88-aef1-
43ad-a01f-c570dd6c903d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948e223,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8293db61940f60e5dbc994465ce6168f58281d2ec15f045b8f414903bdbc99,PodSandboxId:0e5657b9fc32fe7faff8e1530d06507fb2575c570c7709f848a49adff605c8e8,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710767077725604302,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-859648c796-kt8t8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38fdc46-99bb-466e-a85b-
c3f2800cbebc,},Annotations:map[string]string{io.kubernetes.container.hash: d078068f,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0143674a5e4aae328cd2a98d18ed6527b5d52ae35924760d8e7a2ede48caadb,PodSandboxId:1facfbfc7ce8fd242de07dca54b2d83cfeaa14ed1b62b4124a2499ad6184edb8,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710767063061385713,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-
node-d7447cc7f-lt97j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 371df4ae-c61b-414f-9460-d8bafb56af01,},Annotations:map[string]string{io.kubernetes.container.hash: 58731939,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b181a8d8a6db154918a89c67c712133c15e81b5264b0145d4ef33bb0ce6a16,PodSandboxId:42d32bc6514e9fe4b96a78e2be07d21bac50beba2e6337d4b20aa0c41632a142,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710767062978117836,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name:
hello-node-connect-55497b8b78-52h22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11e0448b-c51e-40b5-85c7-0c8f195ba010,},Annotations:map[string]string{io.kubernetes.container.hash: 30ae6b78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b9303e127fcbd29c8fa1c0f1caeb0a41638c2d93051a318ec2784268307e67,PodSandboxId:c0d36f2eb7a03f300bf9ce3f4dd25b730e20e83de988b8019649389ccce9863e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767048960620505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55d6dcce-caf3-4ca3-a51f-9b1f34aba0e3,},Annotations:map[string]string{io.kubernetes.container.hash: 8d162b2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb7c556a9674287544d856529d0ab9453806b7026882e111bdffabc8c25f692,PodSandboxId:9210d24c1dcaa7ec0029db9e5909e00d2b91a836b55aba03bac5574af45cc669,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767047607207609,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cqnlm,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a4cba0-5091-4123-9b18-554b2fce114c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d127deb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca41d06e94789fa06311fc849d68085c83452e03f77b77f0b5c469f7d89facf,PodSandboxId:539ba2d0ac9c86eaf04014376bf80a191738fc5a25184228f421c05bc0f3b523,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606
d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767047512773792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vhf2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b70a2-d392-4c69-ab82-11da787bf094,},Annotations:map[string]string{io.kubernetes.container.hash: 6a4dfb87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6939267e1734ab0be1f6a081905ac3d5d1c6da66e7420419778003a31715eaab,PodSandboxId:9d23e582e9201ae91cadd04233114e7f83ba52272e97e6a892823be8c1e9995f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767047191049743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdk25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980c5e40-0f52-4f0f-8417-7dc8e3072b1a,},Annotations:map[string]string{io.kubernetes.container.hash: b76f2467,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee5a3077d6af7a82024f4824745aa8e97afa4fe163d038f9485c0be699a9300,PodSandboxId:cff17a059c4136fc8e63fb1aef38ab6f9ded8acb648862b0b44f7b4de4fbf97b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b
32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767027979702424,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd148618c332d156d980f973e894e9d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10f4103aedd0424f9e4f14418a36abcdae012dcfd0b6da0363469a647b7e9e8,PodSandboxId:fec862bd04df507f4c143be3c16876b25d6dd8f641d9ceabd4c1cd86fb4a03f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e718
8be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767027919728371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b7e9ba7f345b52a5170d5fee69251f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9822eea9c4fbffc45a5988ae06460a6b395d117320ac170d028ffe83fddc1d,PodSandboxId:d6d9506c22f843194ff4a80ec672fa50c9d54c532c0d7caee264c31190ce3096,Metadata:&ContainerMetadata{Name:etcd,Attempt:5,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf
2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767027922601405,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81cc44d502c421c38b54718cde7c2bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5effa017,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2a3ecebaa3c029284ecdffad619497e4a1d268e2814c1735004dbe3f602617,PodSandboxId:2e158247ca4773f78dfd2152878096605bee5b2afafefe7184098efa3df8c01a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c25
7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767027870820276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-044661,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fcd0ef7ab3c94d6a0d6e0a57891ba75,},Annotations:map[string]string{io.kubernetes.container.hash: 13b69ddd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=120b8ec6-a715-4556-8adb-c65ea2ec79f3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	bdb656c49d152       docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7                  4 minutes ago       Running             myfrontend                  0                   f74e5937a3ddd       sp-pod
	c051cc3b46176       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   4 minutes ago       Running             dashboard-metrics-scraper   0                   916a67a1efdfb       dashboard-metrics-scraper-7fd5cb4ddc-m9pkt
	373346d075233       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              5 minutes ago       Exited              mount-munger                0                   9d53addf62995       busybox-mount
	1f8293db61940       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  5 minutes ago       Running             mysql                       0                   0e5657b9fc32f       mysql-859648c796-kt8t8
	b0143674a5e4a       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               5 minutes ago       Running             echoserver                  0                   1facfbfc7ce8f       hello-node-d7447cc7f-lt97j
	39b181a8d8a6d       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               5 minutes ago       Running             echoserver                  0                   42d32bc6514e9       hello-node-connect-55497b8b78-52h22
	50b9303e127fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 5 minutes ago       Running             storage-provisioner         0                   c0d36f2eb7a03       storage-provisioner
	cbb7c556a9674       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                 5 minutes ago       Running             coredns                     0                   9210d24c1dcaa       coredns-5dd5756b68-cqnlm
	cca41d06e9478       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                 5 minutes ago       Running             coredns                     0                   539ba2d0ac9c8       coredns-5dd5756b68-vhf2s
	6939267e1734a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                 5 minutes ago       Running             kube-proxy                  0                   9d23e582e9201       kube-proxy-cdk25
	1ee5a3077d6af       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                 6 minutes ago       Running             kube-scheduler              3                   cff17a059c413       kube-scheduler-functional-044661
	3c9822eea9c4f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                 6 minutes ago       Running             etcd                        5                   d6d9506c22f84       etcd-functional-044661
	a10f4103aedd0       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                 6 minutes ago       Running             kube-controller-manager     3                   fec862bd04df5       kube-controller-manager-functional-044661
	6c2a3ecebaa3c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                 6 minutes ago       Running             kube-apiserver              1                   2e158247ca477       kube-apiserver-functional-044661
	
	
	==> coredns [cbb7c556a9674287544d856529d0ab9453806b7026882e111bdffabc8c25f692] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [cca41d06e94789fa06311fc849d68085c83452e03f77b77f0b5c469f7d89facf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               functional-044661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-044661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=functional-044661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_03_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:03:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-044661
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:09:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:05:27 +0000   Mon, 18 Mar 2024 13:03:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:05:27 +0000   Mon, 18 Mar 2024 13:03:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:05:27 +0000   Mon, 18 Mar 2024 13:03:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:05:27 +0000   Mon, 18 Mar 2024 13:04:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    functional-044661
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 a20ed9a9ce88493398645f3086180323
	  System UUID:                a20ed9a9-ce88-4933-9864-5f3086180323
	  Boot ID:                    2d249f62-5c98-444e-9428-939f11aa5770
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-55497b8b78-52h22           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  default                     hello-node-d7447cc7f-lt97j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  default                     mysql-859648c796-kt8t8                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    5m29s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 coredns-5dd5756b68-cqnlm                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m43s
	  kube-system                 coredns-5dd5756b68-vhf2s                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m43s
	  kube-system                 etcd-functional-044661                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m55s
	  kube-system                 kube-apiserver-functional-044661              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-controller-manager-functional-044661     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-proxy-cdk25                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-functional-044661              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-m9pkt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4jj74         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (27%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m41s  kube-proxy       
	  Normal  Starting                 5m55s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m55s  kubelet          Node functional-044661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s  kubelet          Node functional-044661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s  kubelet          Node functional-044661 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             5m55s  kubelet          Node functional-044661 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  5m55s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m45s  kubelet          Node functional-044661 status is now: NodeReady
	  Normal  RegisteredNode           5m44s  node-controller  Node functional-044661 event: Registered Node functional-044661 in Controller
	
	
	==> dmesg <==
	[ +10.221131] systemd-fstab-generator[3372]: Ignoring "noauto" option for root device
	[ +17.012705] kauditd_printk_skb: 50 callbacks suppressed
	[  +1.787231] systemd-fstab-generator[3756]: Ignoring "noauto" option for root device
	[Mar18 12:57] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.636085] systemd-fstab-generator[4779]: Ignoring "noauto" option for root device
	[  +0.181886] systemd-fstab-generator[4821]: Ignoring "noauto" option for root device
	[  +0.195966] systemd-fstab-generator[4835]: Ignoring "noauto" option for root device
	[  +0.165400] systemd-fstab-generator[4847]: Ignoring "noauto" option for root device
	[  +0.277492] systemd-fstab-generator[4871]: Ignoring "noauto" option for root device
	[Mar18 12:58] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.530310] systemd-fstab-generator[5210]: Ignoring "noauto" option for root device
	[ +11.655028] kauditd_printk_skb: 74 callbacks suppressed
	[Mar18 12:59] systemd-fstab-generator[5716]: Ignoring "noauto" option for root device
	[  +5.617773] kauditd_printk_skb: 37 callbacks suppressed
	[Mar18 13:03] systemd-fstab-generator[7428]: Ignoring "noauto" option for root device
	[  +7.296654] systemd-fstab-generator[7749]: Ignoring "noauto" option for root device
	[  +0.100974] kauditd_printk_skb: 68 callbacks suppressed
	[Mar18 13:04] systemd-fstab-generator[7953]: Ignoring "noauto" option for root device
	[  +0.096080] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.615334] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.523483] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.174549] kauditd_printk_skb: 40 callbacks suppressed
	[ +18.650410] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.389425] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.023905] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [3c9822eea9c4fbffc45a5988ae06460a6b395d117320ac170d028ffe83fddc1d] <==
	{"level":"info","ts":"2024-03-18T13:03:48.801439Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:functional-044661 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:03:48.801584Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:03:48.80281Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:03:48.80568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:03:48.806645Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-03-18T13:03:48.80686Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:03:48.807011Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:03:48.807491Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:03:48.831046Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:03:48.831091Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T13:04:32.267393Z","caller":"traceutil/trace.go:171","msg":"trace[811936807] linearizableReadLoop","detail":"{readStateIndex:536; appliedIndex:535; }","duration":"175.458682ms","start":"2024-03-18T13:04:32.091903Z","end":"2024-03-18T13:04:32.267362Z","steps":["trace[811936807] 'read index received'  (duration: 175.315708ms)","trace[811936807] 'applied index is now lower than readState.Index'  (duration: 142.513µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T13:04:32.267662Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.800064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/\" range_end:\"/registry/services/specs/default0\" ","response":"range_response_count:4 size:2631"}
	{"level":"info","ts":"2024-03-18T13:04:32.267701Z","caller":"traceutil/trace.go:171","msg":"trace[658354777] range","detail":"{range_begin:/registry/services/specs/default/; range_end:/registry/services/specs/default0; response_count:4; response_revision:520; }","duration":"175.902577ms","start":"2024-03-18T13:04:32.091783Z","end":"2024-03-18T13:04:32.267685Z","steps":["trace[658354777] 'agreement among raft nodes before linearized reading'  (duration: 175.733487ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:04:32.267927Z","caller":"traceutil/trace.go:171","msg":"trace[611067780] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"248.108655ms","start":"2024-03-18T13:04:32.01981Z","end":"2024-03-18T13:04:32.267919Z","steps":["trace[611067780] 'process raft request'  (duration: 247.456203ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:04:36.911899Z","caller":"traceutil/trace.go:171","msg":"trace[1557938428] linearizableReadLoop","detail":"{readStateIndex:542; appliedIndex:541; }","duration":"116.160173ms","start":"2024-03-18T13:04:36.795724Z","end":"2024-03-18T13:04:36.911884Z","steps":["trace[1557938428] 'read index received'  (duration: 115.963585ms)","trace[1557938428] 'applied index is now lower than readState.Index'  (duration: 196.082µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T13:04:36.912125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.452995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:12519"}
	{"level":"info","ts":"2024-03-18T13:04:36.912116Z","caller":"traceutil/trace.go:171","msg":"trace[808348007] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"314.603874ms","start":"2024-03-18T13:04:36.597499Z","end":"2024-03-18T13:04:36.912102Z","steps":["trace[808348007] 'process raft request'  (duration: 314.282568ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:04:36.912155Z","caller":"traceutil/trace.go:171","msg":"trace[1127898908] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:525; }","duration":"116.503019ms","start":"2024-03-18T13:04:36.795645Z","end":"2024-03-18T13:04:36.912148Z","steps":["trace[1127898908] 'agreement among raft nodes before linearized reading'  (duration: 116.303742ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:04:36.912586Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:04:36.597483Z","time spent":"314.680517ms","remote":"127.0.0.1:45610","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1759,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/busybox-mount\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/busybox-mount\" value_size:1715 >> failure:<>"}
	{"level":"info","ts":"2024-03-18T13:04:41.508117Z","caller":"traceutil/trace.go:171","msg":"trace[1157246332] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"192.024286ms","start":"2024-03-18T13:04:41.316079Z","end":"2024-03-18T13:04:41.508103Z","steps":["trace[1157246332] 'process raft request'  (duration: 191.866343ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:04:42.537618Z","caller":"traceutil/trace.go:171","msg":"trace[1370026411] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"132.284707ms","start":"2024-03-18T13:04:42.405319Z","end":"2024-03-18T13:04:42.537604Z","steps":["trace[1370026411] 'process raft request'  (duration: 131.831844ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:04:43.924441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.090273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T13:04:43.924539Z","caller":"traceutil/trace.go:171","msg":"trace[97464449] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"232.203945ms","start":"2024-03-18T13:04:43.692324Z","end":"2024-03-18T13:04:43.924528Z","steps":["trace[97464449] 'range keys from in-memory index tree'  (duration: 232.014918ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:04:43.924705Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.967291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13554"}
	{"level":"info","ts":"2024-03-18T13:04:43.924837Z","caller":"traceutil/trace.go:171","msg":"trace[2087928013] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:543; }","duration":"129.101725ms","start":"2024-03-18T13:04:43.795727Z","end":"2024-03-18T13:04:43.924829Z","steps":["trace[2087928013] 'range keys from in-memory index tree'  (duration: 128.877528ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:09:49 up 15 min,  0 users,  load average: 0.05, 0.35, 0.32
	Linux functional-044661 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6c2a3ecebaa3c029284ecdffad619497e4a1d268e2814c1735004dbe3f602617] <==
	I0318 13:03:51.022241       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:03:51.022346       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:03:51.022375       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:03:51.022474       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:03:51.805384       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0318 13:03:51.814202       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0318 13:03:51.814281       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:03:52.467016       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:03:52.517305       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:03:52.616372       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0318 13:03:52.626204       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0318 13:03:52.628084       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:03:52.637555       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:03:52.930690       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:03:54.159465       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:03:54.183338       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0318 13:03:54.199570       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:04:06.535792       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0318 13:04:06.681424       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0318 13:04:13.556559       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.212.157"}
	I0318 13:04:17.879451       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.39.155"}
	I0318 13:04:18.523713       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.78.128"}
	I0318 13:04:20.923456       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.53.44"}
	I0318 13:04:49.817080       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.2.206"}
	I0318 13:04:49.889365       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.250.175"}
	
	
	==> kube-controller-manager [a10f4103aedd0424f9e4f14418a36abcdae012dcfd0b6da0363469a647b7e9e8] <==
	E0318 13:04:49.534064       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0318 13:04:49.552304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.132945ms"
	E0318 13:04:49.552425       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0318 13:04:49.552754       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0318 13:04:49.552884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="24.322207ms"
	E0318 13:04:49.552894       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0318 13:04:49.552915       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0318 13:04:49.561232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.756631ms"
	E0318 13:04:49.561272       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0318 13:04:49.561305       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0318 13:04:49.564929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="7.792004ms"
	E0318 13:04:49.565042       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0318 13:04:49.565073       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0318 13:04:49.590695       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-4jj74"
	I0318 13:04:49.598389       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-m9pkt"
	I0318 13:04:49.622784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="45.238765ms"
	I0318 13:04:49.629517       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.459775ms"
	I0318 13:04:49.688186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.771438ms"
	I0318 13:04:49.688326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.005µs"
	I0318 13:04:49.736502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="113.585082ms"
	I0318 13:04:49.737821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="216.356µs"
	I0318 13:04:49.807338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="70.599561ms"
	I0318 13:04:49.807450       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="56.134µs"
	I0318 13:04:56.325319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="13.889964ms"
	I0318 13:04:56.325710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="61.814µs"
	
	
	==> kube-proxy [6939267e1734ab0be1f6a081905ac3d5d1c6da66e7420419778003a31715eaab] <==
	I0318 13:04:07.834043       1 server_others.go:69] "Using iptables proxy"
	I0318 13:04:07.855823       1 node.go:141] Successfully retrieved node IP: 192.168.39.198
	I0318 13:04:07.903477       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:04:07.903501       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:04:07.906542       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:04:07.906631       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:04:07.906828       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:04:07.907106       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:04:07.909220       1 config.go:188] "Starting service config controller"
	I0318 13:04:07.909285       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:04:07.909347       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:04:07.909366       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:04:07.910783       1 config.go:315] "Starting node config controller"
	I0318 13:04:07.910847       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:04:08.011060       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:04:08.011191       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:04:08.011217       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1ee5a3077d6af7a82024f4824745aa8e97afa4fe163d038f9485c0be699a9300] <==
	W0318 13:03:50.976302       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:03:50.978004       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:03:50.978048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:03:50.978046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:03:50.978284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:03:50.978510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:03:51.793880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 13:03:51.794072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 13:03:51.861546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 13:03:51.861668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:03:51.931114       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:03:51.931139       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:03:52.057860       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:03:52.058203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:03:52.094433       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 13:03:52.094637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 13:03:52.094913       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 13:03:52.095033       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 13:03:52.149536       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:03:52.149599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:03:52.151017       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:03:52.151062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:03:52.256527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:03:52.256573       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:03:52.538558       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 13:04:55 functional-044661 kubelet[7756]: I0318 13:04:55.053344    7756 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="1bd0afc360df0bd7aacb06b63e2294315faff2f9df5d66d3237ef9c9ab150480" err="rpc error: code = NotFound desc = could not find container \"1bd0afc360df0bd7aacb06b63e2294315faff2f9df5d66d3237ef9c9ab150480\": container with ID starting with 1bd0afc360df0bd7aacb06b63e2294315faff2f9df5d66d3237ef9c9ab150480 not found: ID does not exist"
	Mar 18 13:04:55 functional-044661 kubelet[7756]: E0318 13:04:55.055731    7756 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50475c62c4d4751b422d47cb52f8cbcd7ff3f7926f8005ff50db9c0a0ae2667f\": container with ID starting with 50475c62c4d4751b422d47cb52f8cbcd7ff3f7926f8005ff50db9c0a0ae2667f not found: ID does not exist" containerID="50475c62c4d4751b422d47cb52f8cbcd7ff3f7926f8005ff50db9c0a0ae2667f"
	Mar 18 13:04:55 functional-044661 kubelet[7756]: I0318 13:04:55.055760    7756 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="50475c62c4d4751b422d47cb52f8cbcd7ff3f7926f8005ff50db9c0a0ae2667f" err="rpc error: code = NotFound desc = could not find container \"50475c62c4d4751b422d47cb52f8cbcd7ff3f7926f8005ff50db9c0a0ae2667f\": container with ID starting with 50475c62c4d4751b422d47cb52f8cbcd7ff3f7926f8005ff50db9c0a0ae2667f not found: ID does not exist"
	Mar 18 13:04:57 functional-044661 kubelet[7756]: I0318 13:04:57.316187    7756 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-m9pkt" podStartSLOduration=4.089352534 podCreationTimestamp="2024-03-18 13:04:49 +0000 UTC" firstStartedPulling="2024-03-18 13:04:51.535567502 +0000 UTC m=+57.406810988" lastFinishedPulling="2024-03-18 13:04:55.762341955 +0000 UTC m=+61.633585476" observedRunningTime="2024-03-18 13:04:56.311854346 +0000 UTC m=+62.183097852" watchObservedRunningTime="2024-03-18 13:04:57.316127022 +0000 UTC m=+63.187370541"
	Mar 18 13:04:57 functional-044661 kubelet[7756]: I0318 13:04:57.317304    7756 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.917047715 podCreationTimestamp="2024-03-18 13:04:53 +0000 UTC" firstStartedPulling="2024-03-18 13:04:54.123886717 +0000 UTC m=+59.995130203" lastFinishedPulling="2024-03-18 13:04:56.524102189 +0000 UTC m=+62.395345687" observedRunningTime="2024-03-18 13:04:57.316728104 +0000 UTC m=+63.187971602" watchObservedRunningTime="2024-03-18 13:04:57.317263199 +0000 UTC m=+63.188506705"
	Mar 18 13:05:54 functional-044661 kubelet[7756]: E0318 13:05:54.517130    7756 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:05:54 functional-044661 kubelet[7756]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:05:54 functional-044661 kubelet[7756]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:05:54 functional-044661 kubelet[7756]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:05:54 functional-044661 kubelet[7756]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:06:54 functional-044661 kubelet[7756]: E0318 13:06:54.516243    7756 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:06:54 functional-044661 kubelet[7756]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:06:54 functional-044661 kubelet[7756]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:06:54 functional-044661 kubelet[7756]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:06:54 functional-044661 kubelet[7756]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:07:54 functional-044661 kubelet[7756]: E0318 13:07:54.518320    7756 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:07:54 functional-044661 kubelet[7756]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:07:54 functional-044661 kubelet[7756]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:07:54 functional-044661 kubelet[7756]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:07:54 functional-044661 kubelet[7756]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:08:54 functional-044661 kubelet[7756]: E0318 13:08:54.520041    7756 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:08:54 functional-044661 kubelet[7756]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:08:54 functional-044661 kubelet[7756]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:08:54 functional-044661 kubelet[7756]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:08:54 functional-044661 kubelet[7756]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [50b9303e127fcbd29c8fa1c0f1caeb0a41638c2d93051a318ec2784268307e67] <==
	I0318 13:04:09.064144       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 13:04:09.078700       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 13:04:09.079448       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 13:04:09.099222       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 13:04:09.099418       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-044661_b703717d-3e85-4c17-9fe2-cc0c9dfb5628!
	I0318 13:04:09.099806       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"028cda91-ad99-4de7-8103-60de155714fb", APIVersion:"v1", ResourceVersion:"380", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-044661_b703717d-3e85-4c17-9fe2-cc0c9dfb5628 became leader
	I0318 13:04:09.201608       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-044661_b703717d-3e85-4c17-9fe2-cc0c9dfb5628!
	I0318 13:04:24.365404       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0318 13:04:24.365523       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    a5342afd-15cc-470b-adb8-b1bd5af8a056 354 0 2024-03-18 13:04:08 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-03-18 13:04:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-9940a92e-7b28-4602-b805-7a00d3fd4cd7 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  9940a92e-7b28-4602-b805-7a00d3fd4cd7 501 0 2024-03-18 13:04:24 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-03-18 13:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-03-18 13:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0318 13:04:24.365938       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-9940a92e-7b28-4602-b805-7a00d3fd4cd7" provisioned
	I0318 13:04:24.366048       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0318 13:04:24.366081       1 volume_store.go:212] Trying to save persistentvolume "pvc-9940a92e-7b28-4602-b805-7a00d3fd4cd7"
	I0318 13:04:24.367201       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9940a92e-7b28-4602-b805-7a00d3fd4cd7", APIVersion:"v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0318 13:04:24.390834       1 volume_store.go:219] persistentvolume "pvc-9940a92e-7b28-4602-b805-7a00d3fd4cd7" saved
	I0318 13:04:24.391603       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9940a92e-7b28-4602-b805-7a00d3fd4cd7", APIVersion:"v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-9940a92e-7b28-4602-b805-7a00d3fd4cd7
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-044661 -n functional-044661
helpers_test.go:261: (dbg) Run:  kubectl --context functional-044661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-8694d4445c-4jj74
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-044661 describe pod busybox-mount kubernetes-dashboard-8694d4445c-4jj74
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-044661 describe pod busybox-mount kubernetes-dashboard-8694d4445c-4jj74: exit status 1 (74.311259ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-044661/192.168.39.198
	Start Time:       Mon, 18 Mar 2024 13:04:36 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  mount-munger:
	    Container ID:  cri-o://373346d075233044330a0abf48eb3f47a2aa5848d30012cb81b244c544a94606
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 18 Mar 2024 13:04:47 +0000
	      Finished:     Mon, 18 Mar 2024 13:04:47 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tmrsr (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tmrsr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m14s  default-scheduler  Successfully assigned default/busybox-mount to functional-044661
	  Normal  Pulling    5m12s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m3s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.723s (9.55s including waiting)
	  Normal  Created    5m3s   kubelet            Created container mount-munger
	  Normal  Started    5m3s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-4jj74" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-044661 describe pod busybox-mount kubernetes-dashboard-8694d4445c-4jj74: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.09s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 node stop m02 -v=7 --alsologtostderr
E0318 13:14:58.880011 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:15:39.841106 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.516169185s)

                                                
                                                
-- stdout --
	* Stopping node "ha-942957-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:14:38.766619 1089696 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:14:38.766736 1089696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:14:38.766742 1089696 out.go:304] Setting ErrFile to fd 2...
	I0318 13:14:38.766746 1089696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:14:38.767031 1089696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:14:38.767346 1089696 mustload.go:65] Loading cluster: ha-942957
	I0318 13:14:38.767764 1089696 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:14:38.767781 1089696 stop.go:39] StopHost: ha-942957-m02
	I0318 13:14:38.768290 1089696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:14:38.768363 1089696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:14:38.786013 1089696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0318 13:14:38.786591 1089696 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:14:38.787263 1089696 main.go:141] libmachine: Using API Version  1
	I0318 13:14:38.787287 1089696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:14:38.787634 1089696 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:14:38.789933 1089696 out.go:177] * Stopping node "ha-942957-m02"  ...
	I0318 13:14:38.791249 1089696 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 13:14:38.791282 1089696 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:14:38.791525 1089696 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 13:14:38.791551 1089696 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:14:38.794434 1089696 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:14:38.794900 1089696 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:14:38.794930 1089696 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:14:38.795129 1089696 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:14:38.795333 1089696 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:14:38.795490 1089696 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:14:38.795639 1089696 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	I0318 13:14:38.883900 1089696 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 13:14:38.939812 1089696 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 13:14:38.997460 1089696 main.go:141] libmachine: Stopping "ha-942957-m02"...
	I0318 13:14:38.997501 1089696 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:14:38.999041 1089696 main.go:141] libmachine: (ha-942957-m02) Calling .Stop
	I0318 13:14:39.002412 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 0/120
	I0318 13:14:40.003823 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 1/120
	I0318 13:14:41.005739 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 2/120
	I0318 13:14:42.007379 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 3/120
	I0318 13:14:43.008889 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 4/120
	I0318 13:14:44.010892 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 5/120
	I0318 13:14:45.012373 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 6/120
	I0318 13:14:46.014463 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 7/120
	I0318 13:14:47.016083 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 8/120
	I0318 13:14:48.018340 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 9/120
	I0318 13:14:49.020674 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 10/120
	I0318 13:14:50.022301 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 11/120
	I0318 13:14:51.023533 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 12/120
	I0318 13:14:52.025796 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 13/120
	I0318 13:14:53.027201 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 14/120
	I0318 13:14:54.029443 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 15/120
	I0318 13:14:55.031006 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 16/120
	I0318 13:14:56.032560 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 17/120
	I0318 13:14:57.034321 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 18/120
	I0318 13:14:58.036373 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 19/120
	I0318 13:14:59.038419 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 20/120
	I0318 13:15:00.040117 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 21/120
	I0318 13:15:01.041737 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 22/120
	I0318 13:15:02.043516 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 23/120
	I0318 13:15:03.044963 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 24/120
	I0318 13:15:04.046969 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 25/120
	I0318 13:15:05.049203 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 26/120
	I0318 13:15:06.051809 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 27/120
	I0318 13:15:07.053982 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 28/120
	I0318 13:15:08.055669 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 29/120
	I0318 13:15:09.057964 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 30/120
	I0318 13:15:10.059265 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 31/120
	I0318 13:15:11.060659 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 32/120
	I0318 13:15:12.062325 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 33/120
	I0318 13:15:13.063856 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 34/120
	I0318 13:15:14.065987 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 35/120
	I0318 13:15:15.067452 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 36/120
	I0318 13:15:16.068760 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 37/120
	I0318 13:15:17.070630 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 38/120
	I0318 13:15:18.072127 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 39/120
	I0318 13:15:19.073668 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 40/120
	I0318 13:15:20.075495 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 41/120
	I0318 13:15:21.077348 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 42/120
	I0318 13:15:22.079006 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 43/120
	I0318 13:15:23.081455 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 44/120
	I0318 13:15:24.083721 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 45/120
	I0318 13:15:25.085090 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 46/120
	I0318 13:15:26.086689 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 47/120
	I0318 13:15:27.088873 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 48/120
	I0318 13:15:28.090325 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 49/120
	I0318 13:15:29.092600 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 50/120
	I0318 13:15:30.094669 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 51/120
	I0318 13:15:31.095975 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 52/120
	I0318 13:15:32.097437 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 53/120
	I0318 13:15:33.098751 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 54/120
	I0318 13:15:34.100925 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 55/120
	I0318 13:15:35.102276 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 56/120
	I0318 13:15:36.103977 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 57/120
	I0318 13:15:37.105597 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 58/120
	I0318 13:15:38.107163 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 59/120
	I0318 13:15:39.109404 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 60/120
	I0318 13:15:40.110822 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 61/120
	I0318 13:15:41.112310 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 62/120
	I0318 13:15:42.113798 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 63/120
	I0318 13:15:43.115398 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 64/120
	I0318 13:15:44.117352 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 65/120
	I0318 13:15:45.119105 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 66/120
	I0318 13:15:46.120705 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 67/120
	I0318 13:15:47.122591 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 68/120
	I0318 13:15:48.123992 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 69/120
	I0318 13:15:49.126509 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 70/120
	I0318 13:15:50.128887 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 71/120
	I0318 13:15:51.130489 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 72/120
	I0318 13:15:52.132285 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 73/120
	I0318 13:15:53.135010 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 74/120
	I0318 13:15:54.136872 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 75/120
	I0318 13:15:55.139652 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 76/120
	I0318 13:15:56.141256 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 77/120
	I0318 13:15:57.142966 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 78/120
	I0318 13:15:58.144708 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 79/120
	I0318 13:15:59.146962 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 80/120
	I0318 13:16:00.148605 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 81/120
	I0318 13:16:01.150924 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 82/120
	I0318 13:16:02.152601 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 83/120
	I0318 13:16:03.154563 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 84/120
	I0318 13:16:04.156939 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 85/120
	I0318 13:16:05.158965 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 86/120
	I0318 13:16:06.160592 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 87/120
	I0318 13:16:07.162410 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 88/120
	I0318 13:16:08.163707 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 89/120
	I0318 13:16:09.165865 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 90/120
	I0318 13:16:10.167521 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 91/120
	I0318 13:16:11.169004 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 92/120
	I0318 13:16:12.170443 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 93/120
	I0318 13:16:13.171907 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 94/120
	I0318 13:16:14.174078 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 95/120
	I0318 13:16:15.175497 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 96/120
	I0318 13:16:16.177460 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 97/120
	I0318 13:16:17.179138 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 98/120
	I0318 13:16:18.180926 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 99/120
	I0318 13:16:19.183029 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 100/120
	I0318 13:16:20.184548 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 101/120
	I0318 13:16:21.185847 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 102/120
	I0318 13:16:22.187221 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 103/120
	I0318 13:16:23.188700 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 104/120
	I0318 13:16:24.190956 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 105/120
	I0318 13:16:25.192426 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 106/120
	I0318 13:16:26.194560 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 107/120
	I0318 13:16:27.196233 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 108/120
	I0318 13:16:28.197713 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 109/120
	I0318 13:16:29.199536 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 110/120
	I0318 13:16:30.201804 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 111/120
	I0318 13:16:31.203494 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 112/120
	I0318 13:16:32.205158 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 113/120
	I0318 13:16:33.206592 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 114/120
	I0318 13:16:34.208914 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 115/120
	I0318 13:16:35.210662 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 116/120
	I0318 13:16:36.212016 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 117/120
	I0318 13:16:37.214850 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 118/120
	I0318 13:16:38.216322 1089696 main.go:141] libmachine: (ha-942957-m02) Waiting for machine to stop 119/120
	I0318 13:16:39.217444 1089696 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 13:16:39.217690 1089696 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-942957 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr: exit status 3 (19.120092889s)

                                                
                                                
-- stdout --
	ha-942957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-942957-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:16:39.281483 1090026 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:16:39.281623 1090026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:16:39.281634 1090026 out.go:304] Setting ErrFile to fd 2...
	I0318 13:16:39.281638 1090026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:16:39.281873 1090026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:16:39.282076 1090026 out.go:298] Setting JSON to false
	I0318 13:16:39.282130 1090026 mustload.go:65] Loading cluster: ha-942957
	I0318 13:16:39.282263 1090026 notify.go:220] Checking for updates...
	I0318 13:16:39.282633 1090026 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:16:39.282655 1090026 status.go:255] checking status of ha-942957 ...
	I0318 13:16:39.283056 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:39.283130 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:39.303248 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41713
	I0318 13:16:39.303810 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:39.304480 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:39.304509 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:39.304983 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:39.305237 1090026 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:16:39.306988 1090026 status.go:330] ha-942957 host status = "Running" (err=<nil>)
	I0318 13:16:39.307009 1090026 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:16:39.307365 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:39.307419 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:39.322796 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34583
	I0318 13:16:39.323220 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:39.323685 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:39.323708 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:39.324041 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:39.324232 1090026 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:16:39.326919 1090026 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:16:39.327428 1090026 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:16:39.327451 1090026 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:16:39.327561 1090026 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:16:39.327895 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:39.327953 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:39.343300 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32989
	I0318 13:16:39.343790 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:39.344361 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:39.344386 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:39.344753 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:39.344971 1090026 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:16:39.345201 1090026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:16:39.345237 1090026 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:16:39.348024 1090026 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:16:39.348427 1090026 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:16:39.348454 1090026 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:16:39.348718 1090026 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:16:39.348911 1090026 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:16:39.349056 1090026 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:16:39.349181 1090026 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:16:39.434734 1090026 ssh_runner.go:195] Run: systemctl --version
	I0318 13:16:39.442231 1090026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:16:39.461376 1090026 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:16:39.461409 1090026 api_server.go:166] Checking apiserver status ...
	I0318 13:16:39.461451 1090026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:16:39.478347 1090026 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0318 13:16:39.488237 1090026 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:16:39.488298 1090026 ssh_runner.go:195] Run: ls
	I0318 13:16:39.493102 1090026 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:16:39.499976 1090026 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:16:39.500006 1090026 status.go:422] ha-942957 apiserver status = Running (err=<nil>)
	I0318 13:16:39.500018 1090026 status.go:257] ha-942957 status: &{Name:ha-942957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:16:39.500036 1090026 status.go:255] checking status of ha-942957-m02 ...
	I0318 13:16:39.500345 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:39.500380 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:39.518323 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0318 13:16:39.518838 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:39.519360 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:39.519386 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:39.519710 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:39.519920 1090026 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:16:39.521620 1090026 status.go:330] ha-942957-m02 host status = "Running" (err=<nil>)
	I0318 13:16:39.521638 1090026 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:16:39.521979 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:39.522025 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:39.537110 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I0318 13:16:39.537636 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:39.538172 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:39.538197 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:39.538550 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:39.538775 1090026 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:16:39.541705 1090026 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:16:39.542093 1090026 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:16:39.542134 1090026 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:16:39.542274 1090026 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:16:39.542578 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:39.542613 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:39.557424 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45555
	I0318 13:16:39.557860 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:39.558374 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:39.558395 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:39.558851 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:39.559035 1090026 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:16:39.559229 1090026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:16:39.559255 1090026 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:16:39.562069 1090026 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:16:39.562575 1090026 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:16:39.562599 1090026 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:16:39.562799 1090026 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:16:39.563004 1090026 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:16:39.563192 1090026 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:16:39.563364 1090026 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	W0318 13:16:57.956066 1090026 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0318 13:16:57.956209 1090026 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0318 13:16:57.956237 1090026 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:16:57.956270 1090026 status.go:257] ha-942957-m02 status: &{Name:ha-942957-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 13:16:57.956299 1090026 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:16:57.956312 1090026 status.go:255] checking status of ha-942957-m03 ...
	I0318 13:16:57.956687 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:57.956745 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:57.972314 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I0318 13:16:57.972817 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:57.973376 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:57.973407 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:57.973747 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:57.973912 1090026 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:16:57.975497 1090026 status.go:330] ha-942957-m03 host status = "Running" (err=<nil>)
	I0318 13:16:57.975519 1090026 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:16:57.975848 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:57.975902 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:57.990314 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0318 13:16:57.990738 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:57.991319 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:57.991348 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:57.991672 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:57.991967 1090026 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:16:57.995033 1090026 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:16:57.995427 1090026 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:16:57.995461 1090026 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:16:57.995598 1090026 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:16:57.995966 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:57.996006 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:58.012704 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I0318 13:16:58.013165 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:58.013711 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:58.013735 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:58.014069 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:58.014278 1090026 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:16:58.014486 1090026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:16:58.014516 1090026 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:16:58.017437 1090026 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:16:58.017906 1090026 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:16:58.017927 1090026 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:16:58.018099 1090026 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:16:58.018297 1090026 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:16:58.018445 1090026 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:16:58.018626 1090026 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:16:58.102197 1090026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:16:58.121934 1090026 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:16:58.121965 1090026 api_server.go:166] Checking apiserver status ...
	I0318 13:16:58.122007 1090026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:16:58.140459 1090026 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0318 13:16:58.152879 1090026 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:16:58.152960 1090026 ssh_runner.go:195] Run: ls
	I0318 13:16:58.157875 1090026 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:16:58.165514 1090026 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:16:58.165551 1090026 status.go:422] ha-942957-m03 apiserver status = Running (err=<nil>)
	I0318 13:16:58.165564 1090026 status.go:257] ha-942957-m03 status: &{Name:ha-942957-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:16:58.165593 1090026 status.go:255] checking status of ha-942957-m04 ...
	I0318 13:16:58.165919 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:58.165976 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:58.182403 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0318 13:16:58.182935 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:58.183472 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:58.183507 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:58.183938 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:58.184140 1090026 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:16:58.185890 1090026 status.go:330] ha-942957-m04 host status = "Running" (err=<nil>)
	I0318 13:16:58.185917 1090026 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:16:58.186222 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:58.186258 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:58.202206 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0318 13:16:58.202690 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:58.203176 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:58.203197 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:58.203513 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:58.203738 1090026 main.go:141] libmachine: (ha-942957-m04) Calling .GetIP
	I0318 13:16:58.206460 1090026 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:16:58.206855 1090026 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:16:58.206893 1090026 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:16:58.207040 1090026 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:16:58.207333 1090026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:58.207370 1090026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:58.224515 1090026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33187
	I0318 13:16:58.224924 1090026 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:58.225458 1090026 main.go:141] libmachine: Using API Version  1
	I0318 13:16:58.225496 1090026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:58.225901 1090026 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:58.226080 1090026 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:16:58.226273 1090026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:16:58.226291 1090026 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:16:58.229000 1090026 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:16:58.229480 1090026 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:16:58.229506 1090026 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:16:58.229658 1090026 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:16:58.229867 1090026 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:16:58.230030 1090026 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:16:58.230184 1090026 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:16:58.317971 1090026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:16:58.337094 1090026 status.go:257] ha-942957-m04 status: &{Name:ha-942957-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr" : exit status 3
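For the nodes that stayed up, the same status pass succeeded: it probed the cluster's virtual-IP endpoint https://192.168.39.254:8443/healthz (taken from the stderr above) and got 200 "ok". A minimal sketch of that health probe, assuming no cluster CA is at hand so TLS verification is skipped (illustrative only, not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Certificate verification is skipped only because this sketch has no
	// access to the cluster CA; the endpoint is the HA virtual IP from the log.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // the run above saw "200: ok"
}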
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-942957 -n ha-942957
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-942957 logs -n 25: (1.51999255s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile666867504/001/cp-test_ha-942957-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957:/home/docker/cp-test_ha-942957-m03_ha-942957.txt                      |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957 sudo cat                                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m03_ha-942957.txt                                |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m02:/home/docker/cp-test_ha-942957-m03_ha-942957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m02 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m03_ha-942957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04:/home/docker/cp-test_ha-942957-m03_ha-942957-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m04 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m03_ha-942957-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp testdata/cp-test.txt                                               | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile666867504/001/cp-test_ha-942957-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957:/home/docker/cp-test_ha-942957-m04_ha-942957.txt                      |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957 sudo cat                                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957.txt                                |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m02:/home/docker/cp-test_ha-942957-m04_ha-942957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m02 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03:/home/docker/cp-test_ha-942957-m04_ha-942957-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m03 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-942957 node stop m02 -v=7                                                    | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:09:51
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:09:51.591109 1085975 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:09:51.591242 1085975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:09:51.591251 1085975 out.go:304] Setting ErrFile to fd 2...
	I0318 13:09:51.591257 1085975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:09:51.591455 1085975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:09:51.592167 1085975 out.go:298] Setting JSON to false
	I0318 13:09:51.593152 1085975 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17539,"bootTime":1710749853,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:09:51.593229 1085975 start.go:139] virtualization: kvm guest
	I0318 13:09:51.595884 1085975 out.go:177] * [ha-942957] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:09:51.597522 1085975 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 13:09:51.597591 1085975 notify.go:220] Checking for updates...
	I0318 13:09:51.599127 1085975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:09:51.600612 1085975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:09:51.602077 1085975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:09:51.603434 1085975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:09:51.604767 1085975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:09:51.606201 1085975 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:09:51.642699 1085975 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 13:09:51.643964 1085975 start.go:297] selected driver: kvm2
	I0318 13:09:51.643991 1085975 start.go:901] validating driver "kvm2" against <nil>
	I0318 13:09:51.644007 1085975 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:09:51.645057 1085975 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:09:51.645143 1085975 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:09:51.660502 1085975 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:09:51.660552 1085975 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:09:51.660762 1085975 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:09:51.660831 1085975 cni.go:84] Creating CNI manager for ""
	I0318 13:09:51.660847 1085975 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 13:09:51.660859 1085975 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 13:09:51.660923 1085975 start.go:340] cluster config:
	{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:09:51.661043 1085975 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:09:51.662888 1085975 out.go:177] * Starting "ha-942957" primary control-plane node in "ha-942957" cluster
	I0318 13:09:51.664159 1085975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:09:51.664190 1085975 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:09:51.664197 1085975 cache.go:56] Caching tarball of preloaded images
	I0318 13:09:51.664270 1085975 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:09:51.664280 1085975 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:09:51.664570 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:09:51.664590 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json: {Name:mk01c7241d7a91ba57e1555d3781792f26b1c281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:51.664724 1085975 start.go:360] acquireMachinesLock for ha-942957: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:09:51.664754 1085975 start.go:364] duration metric: took 15.187µs to acquireMachinesLock for "ha-942957"
	I0318 13:09:51.664771 1085975 start.go:93] Provisioning new machine with config: &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:09:51.664863 1085975 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 13:09:51.666661 1085975 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:09:51.666777 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:09:51.666818 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:09:51.681851 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0318 13:09:51.682396 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:09:51.682996 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:09:51.683028 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:09:51.683760 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:09:51.684245 1085975 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:09:51.684576 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:09:51.684958 1085975 start.go:159] libmachine.API.Create for "ha-942957" (driver="kvm2")
	I0318 13:09:51.684987 1085975 client.go:168] LocalClient.Create starting
	I0318 13:09:51.685052 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 13:09:51.685088 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:09:51.685103 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:09:51.685158 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 13:09:51.685176 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:09:51.685187 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:09:51.685202 1085975 main.go:141] libmachine: Running pre-create checks...
	I0318 13:09:51.685211 1085975 main.go:141] libmachine: (ha-942957) Calling .PreCreateCheck
	I0318 13:09:51.685617 1085975 main.go:141] libmachine: (ha-942957) Calling .GetConfigRaw
	I0318 13:09:51.686087 1085975 main.go:141] libmachine: Creating machine...
	I0318 13:09:51.686102 1085975 main.go:141] libmachine: (ha-942957) Calling .Create
	I0318 13:09:51.686253 1085975 main.go:141] libmachine: (ha-942957) Creating KVM machine...
	I0318 13:09:51.687635 1085975 main.go:141] libmachine: (ha-942957) DBG | found existing default KVM network
	I0318 13:09:51.688431 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:51.688268 1085998 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045f0}
	I0318 13:09:51.688450 1085975 main.go:141] libmachine: (ha-942957) DBG | created network xml: 
	I0318 13:09:51.688457 1085975 main.go:141] libmachine: (ha-942957) DBG | <network>
	I0318 13:09:51.688463 1085975 main.go:141] libmachine: (ha-942957) DBG |   <name>mk-ha-942957</name>
	I0318 13:09:51.688477 1085975 main.go:141] libmachine: (ha-942957) DBG |   <dns enable='no'/>
	I0318 13:09:51.688481 1085975 main.go:141] libmachine: (ha-942957) DBG |   
	I0318 13:09:51.688490 1085975 main.go:141] libmachine: (ha-942957) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 13:09:51.688495 1085975 main.go:141] libmachine: (ha-942957) DBG |     <dhcp>
	I0318 13:09:51.688504 1085975 main.go:141] libmachine: (ha-942957) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 13:09:51.688509 1085975 main.go:141] libmachine: (ha-942957) DBG |     </dhcp>
	I0318 13:09:51.688517 1085975 main.go:141] libmachine: (ha-942957) DBG |   </ip>
	I0318 13:09:51.688521 1085975 main.go:141] libmachine: (ha-942957) DBG |   
	I0318 13:09:51.688528 1085975 main.go:141] libmachine: (ha-942957) DBG | </network>
	I0318 13:09:51.688532 1085975 main.go:141] libmachine: (ha-942957) DBG | 
	I0318 13:09:51.693934 1085975 main.go:141] libmachine: (ha-942957) DBG | trying to create private KVM network mk-ha-942957 192.168.39.0/24...
	I0318 13:09:51.763790 1085975 main.go:141] libmachine: (ha-942957) DBG | private KVM network mk-ha-942957 192.168.39.0/24 created
	I0318 13:09:51.763931 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:51.763753 1085998 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:09:51.763989 1085975 main.go:141] libmachine: (ha-942957) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957 ...
	I0318 13:09:51.764008 1085975 main.go:141] libmachine: (ha-942957) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 13:09:51.764029 1085975 main.go:141] libmachine: (ha-942957) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 13:09:52.024720 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:52.024590 1085998 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa...
	I0318 13:09:52.144568 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:52.144429 1085998 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/ha-942957.rawdisk...
	I0318 13:09:52.144599 1085975 main.go:141] libmachine: (ha-942957) DBG | Writing magic tar header
	I0318 13:09:52.144609 1085975 main.go:141] libmachine: (ha-942957) DBG | Writing SSH key tar header
	I0318 13:09:52.144617 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:52.144545 1085998 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957 ...
	I0318 13:09:52.144735 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957
	I0318 13:09:52.144771 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 13:09:52.144786 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957 (perms=drwx------)
	I0318 13:09:52.144802 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 13:09:52.144809 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 13:09:52.144817 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 13:09:52.144824 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 13:09:52.144834 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 13:09:52.144841 1085975 main.go:141] libmachine: (ha-942957) Creating domain...
	I0318 13:09:52.144848 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:09:52.144860 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 13:09:52.144871 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 13:09:52.144886 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins
	I0318 13:09:52.144894 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home
	I0318 13:09:52.144907 1085975 main.go:141] libmachine: (ha-942957) DBG | Skipping /home - not owner
	I0318 13:09:52.145986 1085975 main.go:141] libmachine: (ha-942957) define libvirt domain using xml: 
	I0318 13:09:52.146006 1085975 main.go:141] libmachine: (ha-942957) <domain type='kvm'>
	I0318 13:09:52.146015 1085975 main.go:141] libmachine: (ha-942957)   <name>ha-942957</name>
	I0318 13:09:52.146023 1085975 main.go:141] libmachine: (ha-942957)   <memory unit='MiB'>2200</memory>
	I0318 13:09:52.146030 1085975 main.go:141] libmachine: (ha-942957)   <vcpu>2</vcpu>
	I0318 13:09:52.146036 1085975 main.go:141] libmachine: (ha-942957)   <features>
	I0318 13:09:52.146049 1085975 main.go:141] libmachine: (ha-942957)     <acpi/>
	I0318 13:09:52.146056 1085975 main.go:141] libmachine: (ha-942957)     <apic/>
	I0318 13:09:52.146067 1085975 main.go:141] libmachine: (ha-942957)     <pae/>
	I0318 13:09:52.146084 1085975 main.go:141] libmachine: (ha-942957)     
	I0318 13:09:52.146096 1085975 main.go:141] libmachine: (ha-942957)   </features>
	I0318 13:09:52.146106 1085975 main.go:141] libmachine: (ha-942957)   <cpu mode='host-passthrough'>
	I0318 13:09:52.146136 1085975 main.go:141] libmachine: (ha-942957)   
	I0318 13:09:52.146158 1085975 main.go:141] libmachine: (ha-942957)   </cpu>
	I0318 13:09:52.146164 1085975 main.go:141] libmachine: (ha-942957)   <os>
	I0318 13:09:52.146169 1085975 main.go:141] libmachine: (ha-942957)     <type>hvm</type>
	I0318 13:09:52.146178 1085975 main.go:141] libmachine: (ha-942957)     <boot dev='cdrom'/>
	I0318 13:09:52.146182 1085975 main.go:141] libmachine: (ha-942957)     <boot dev='hd'/>
	I0318 13:09:52.146187 1085975 main.go:141] libmachine: (ha-942957)     <bootmenu enable='no'/>
	I0318 13:09:52.146197 1085975 main.go:141] libmachine: (ha-942957)   </os>
	I0318 13:09:52.146202 1085975 main.go:141] libmachine: (ha-942957)   <devices>
	I0318 13:09:52.146216 1085975 main.go:141] libmachine: (ha-942957)     <disk type='file' device='cdrom'>
	I0318 13:09:52.146227 1085975 main.go:141] libmachine: (ha-942957)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/boot2docker.iso'/>
	I0318 13:09:52.146235 1085975 main.go:141] libmachine: (ha-942957)       <target dev='hdc' bus='scsi'/>
	I0318 13:09:52.146240 1085975 main.go:141] libmachine: (ha-942957)       <readonly/>
	I0318 13:09:52.146246 1085975 main.go:141] libmachine: (ha-942957)     </disk>
	I0318 13:09:52.146252 1085975 main.go:141] libmachine: (ha-942957)     <disk type='file' device='disk'>
	I0318 13:09:52.146260 1085975 main.go:141] libmachine: (ha-942957)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 13:09:52.146269 1085975 main.go:141] libmachine: (ha-942957)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/ha-942957.rawdisk'/>
	I0318 13:09:52.146277 1085975 main.go:141] libmachine: (ha-942957)       <target dev='hda' bus='virtio'/>
	I0318 13:09:52.146282 1085975 main.go:141] libmachine: (ha-942957)     </disk>
	I0318 13:09:52.146289 1085975 main.go:141] libmachine: (ha-942957)     <interface type='network'>
	I0318 13:09:52.146351 1085975 main.go:141] libmachine: (ha-942957)       <source network='mk-ha-942957'/>
	I0318 13:09:52.146384 1085975 main.go:141] libmachine: (ha-942957)       <model type='virtio'/>
	I0318 13:09:52.146397 1085975 main.go:141] libmachine: (ha-942957)     </interface>
	I0318 13:09:52.146407 1085975 main.go:141] libmachine: (ha-942957)     <interface type='network'>
	I0318 13:09:52.146421 1085975 main.go:141] libmachine: (ha-942957)       <source network='default'/>
	I0318 13:09:52.146433 1085975 main.go:141] libmachine: (ha-942957)       <model type='virtio'/>
	I0318 13:09:52.146447 1085975 main.go:141] libmachine: (ha-942957)     </interface>
	I0318 13:09:52.146458 1085975 main.go:141] libmachine: (ha-942957)     <serial type='pty'>
	I0318 13:09:52.146470 1085975 main.go:141] libmachine: (ha-942957)       <target port='0'/>
	I0318 13:09:52.146483 1085975 main.go:141] libmachine: (ha-942957)     </serial>
	I0318 13:09:52.146500 1085975 main.go:141] libmachine: (ha-942957)     <console type='pty'>
	I0318 13:09:52.146522 1085975 main.go:141] libmachine: (ha-942957)       <target type='serial' port='0'/>
	I0318 13:09:52.146546 1085975 main.go:141] libmachine: (ha-942957)     </console>
	I0318 13:09:52.146567 1085975 main.go:141] libmachine: (ha-942957)     <rng model='virtio'>
	I0318 13:09:52.146580 1085975 main.go:141] libmachine: (ha-942957)       <backend model='random'>/dev/random</backend>
	I0318 13:09:52.146591 1085975 main.go:141] libmachine: (ha-942957)     </rng>
	I0318 13:09:52.146601 1085975 main.go:141] libmachine: (ha-942957)     
	I0318 13:09:52.146608 1085975 main.go:141] libmachine: (ha-942957)     
	I0318 13:09:52.146617 1085975 main.go:141] libmachine: (ha-942957)   </devices>
	I0318 13:09:52.146627 1085975 main.go:141] libmachine: (ha-942957) </domain>
	I0318 13:09:52.146640 1085975 main.go:141] libmachine: (ha-942957) 
	I0318 13:09:52.151732 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:90:91:5f in network default
	I0318 13:09:52.152297 1085975 main.go:141] libmachine: (ha-942957) Ensuring networks are active...
	I0318 13:09:52.152314 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:52.153019 1085975 main.go:141] libmachine: (ha-942957) Ensuring network default is active
	I0318 13:09:52.153290 1085975 main.go:141] libmachine: (ha-942957) Ensuring network mk-ha-942957 is active
	I0318 13:09:52.153733 1085975 main.go:141] libmachine: (ha-942957) Getting domain xml...
	I0318 13:09:52.154447 1085975 main.go:141] libmachine: (ha-942957) Creating domain...
	I0318 13:09:53.344377 1085975 main.go:141] libmachine: (ha-942957) Waiting to get IP...
	I0318 13:09:53.346049 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:53.346865 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:53.346896 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:53.346826 1085998 retry.go:31] will retry after 210.081713ms: waiting for machine to come up
	I0318 13:09:53.558182 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:53.558686 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:53.558710 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:53.558664 1085998 retry.go:31] will retry after 330.740738ms: waiting for machine to come up
	I0318 13:09:53.891328 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:53.891798 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:53.891842 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:53.891735 1085998 retry.go:31] will retry after 436.977306ms: waiting for machine to come up
	I0318 13:09:54.330358 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:54.330771 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:54.330797 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:54.330717 1085998 retry.go:31] will retry after 370.224263ms: waiting for machine to come up
	I0318 13:09:54.702089 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:54.702599 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:54.702641 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:54.702532 1085998 retry.go:31] will retry after 678.316266ms: waiting for machine to come up
	I0318 13:09:55.382306 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:55.382740 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:55.382772 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:55.382662 1085998 retry.go:31] will retry after 772.577483ms: waiting for machine to come up
	I0318 13:09:56.156783 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:56.157216 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:56.157269 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:56.157158 1085998 retry.go:31] will retry after 1.180847447s: waiting for machine to come up
	I0318 13:09:57.339108 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:57.339478 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:57.339538 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:57.339454 1085998 retry.go:31] will retry after 1.39126661s: waiting for machine to come up
	I0318 13:09:58.733271 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:58.733673 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:58.733716 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:58.733639 1085998 retry.go:31] will retry after 1.249593638s: waiting for machine to come up
	I0318 13:09:59.985269 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:59.985791 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:59.985823 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:59.985742 1085998 retry.go:31] will retry after 1.97751072s: waiting for machine to come up
	I0318 13:10:01.964811 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:01.965279 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:10:01.965301 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:10:01.965227 1085998 retry.go:31] will retry after 1.797342776s: waiting for machine to come up
	I0318 13:10:03.765063 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:03.765536 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:10:03.765597 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:10:03.765465 1085998 retry.go:31] will retry after 3.163723566s: waiting for machine to come up
	I0318 13:10:06.931547 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:06.932156 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:10:06.932189 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:10:06.932085 1085998 retry.go:31] will retry after 2.911804479s: waiting for machine to come up
	I0318 13:10:09.847125 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:09.847512 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:10:09.847532 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:10:09.847477 1085998 retry.go:31] will retry after 5.499705405s: waiting for machine to come up
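The repeated "will retry after ..." lines above come from a backoff helper that polls until the guest obtains a DHCP lease. A minimal sketch of the same pattern (not minikube's actual retry.go, and the intervals and cap are assumptions) looks like this:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or the deadline passes,
    // sleeping an increasing, jittered interval between attempts - the same shape
    // as the "will retry after ..." messages in the log above.
    func retryWithBackoff(fn func() error, deadline time.Duration) error {
    	start := time.Now()
    	wait := 200 * time.Millisecond
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("timed out waiting: %w", err)
    		}
    		// Add up to 50% jitter and grow the base interval, capped at a few seconds.
    		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
    		fmt.Printf("will retry after %v\n", sleep)
    		time.Sleep(sleep)
    		if wait < 4*time.Second {
    			wait = wait * 3 / 2
    		}
    	}
    }

    func main() {
    	attempts := 0
    	err := retryWithBackoff(func() error {
    		attempts++
    		if attempts < 4 {
    			return errors.New("unable to find current IP address yet")
    		}
    		return nil
    	}, 30*time.Second)
    	fmt.Println("result:", err, "attempts:", attempts)
    }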
	I0318 13:10:15.351123 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.351573 1085975 main.go:141] libmachine: (ha-942957) Found IP for machine: 192.168.39.68
	I0318 13:10:15.351607 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has current primary IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.351617 1085975 main.go:141] libmachine: (ha-942957) Reserving static IP address...
	I0318 13:10:15.352085 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find host DHCP lease matching {name: "ha-942957", mac: "52:54:00:1a:d5:73", ip: "192.168.39.68"} in network mk-ha-942957
	I0318 13:10:15.427818 1085975 main.go:141] libmachine: (ha-942957) DBG | Getting to WaitForSSH function...
	I0318 13:10:15.427866 1085975 main.go:141] libmachine: (ha-942957) Reserved static IP address: 192.168.39.68
	I0318 13:10:15.427878 1085975 main.go:141] libmachine: (ha-942957) Waiting for SSH to be available...
	I0318 13:10:15.430906 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.431337 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:15.431373 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.431505 1085975 main.go:141] libmachine: (ha-942957) DBG | Using SSH client type: external
	I0318 13:10:15.431583 1085975 main.go:141] libmachine: (ha-942957) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa (-rw-------)
	I0318 13:10:15.431627 1085975 main.go:141] libmachine: (ha-942957) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:10:15.431646 1085975 main.go:141] libmachine: (ha-942957) DBG | About to run SSH command:
	I0318 13:10:15.431661 1085975 main.go:141] libmachine: (ha-942957) DBG | exit 0
	I0318 13:10:15.556263 1085975 main.go:141] libmachine: (ha-942957) DBG | SSH cmd err, output: <nil>: 
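The WaitForSSH step above probes the guest by running the system ssh client with the non-interactive options shown in the log and executing `exit 0`. A small, self-contained sketch of that probe using os/exec (the helper name and the hard-coded values are taken from, or modeled on, the log above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshReady runs `exit 0` on the guest through the system ssh client, using the
    // same non-interactive options that appear in the log above. A nil error means
    // the SSH server is up and the private key is accepted.
    func sshReady(ip, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + ip,
    		"exit 0",
    	}
    	out, err := exec.Command("ssh", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
    	}
    	return nil
    }

    func main() {
    	// IP and key path come from the log above; adjust for your own machine directory.
    	key := "/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa"
    	if err := sshReady("192.168.39.68", key); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("SSH is available")
    }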
	I0318 13:10:15.556562 1085975 main.go:141] libmachine: (ha-942957) KVM machine creation complete!
	I0318 13:10:15.556889 1085975 main.go:141] libmachine: (ha-942957) Calling .GetConfigRaw
	I0318 13:10:15.557412 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:15.557611 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:15.557753 1085975 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 13:10:15.557764 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:10:15.559252 1085975 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 13:10:15.559269 1085975 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 13:10:15.559275 1085975 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 13:10:15.559282 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:15.561521 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.561879 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:15.561912 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.562022 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:15.562203 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.562361 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.562460 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:15.562619 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:15.562881 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:15.562895 1085975 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 13:10:15.667295 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:10:15.667317 1085975 main.go:141] libmachine: Detecting the provisioner...
	I0318 13:10:15.667330 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:15.670424 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.670860 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:15.670893 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.671059 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:15.671284 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.671467 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.671655 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:15.671878 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:15.672126 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:15.672141 1085975 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 13:10:15.776890 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 13:10:15.776995 1085975 main.go:141] libmachine: found compatible host: buildroot
	I0318 13:10:15.777014 1085975 main.go:141] libmachine: Provisioning with buildroot...
	I0318 13:10:15.777025 1085975 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:10:15.777319 1085975 buildroot.go:166] provisioning hostname "ha-942957"
	I0318 13:10:15.777349 1085975 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:10:15.777553 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:15.780483 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.780824 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:15.780858 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.780963 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:15.781160 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.781345 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.781512 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:15.781680 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:15.781853 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:15.781864 1085975 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-942957 && echo "ha-942957" | sudo tee /etc/hostname
	I0318 13:10:15.897918 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-942957
	
	I0318 13:10:15.897947 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:15.900609 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.900915 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:15.900945 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.901114 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:15.901324 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.901479 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.901606 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:15.901755 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:15.901934 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:15.901957 1085975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-942957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-942957/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-942957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:10:16.014910 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:10:16.014952 1085975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 13:10:16.014982 1085975 buildroot.go:174] setting up certificates
	I0318 13:10:16.014996 1085975 provision.go:84] configureAuth start
	I0318 13:10:16.015010 1085975 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:10:16.015393 1085975 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:10:16.018070 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.018424 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.018472 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.018569 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.020928 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.021259 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.021295 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.021432 1085975 provision.go:143] copyHostCerts
	I0318 13:10:16.021487 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:10:16.021547 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 13:10:16.021560 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:10:16.021642 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 13:10:16.021756 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:10:16.021791 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 13:10:16.021802 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:10:16.021848 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 13:10:16.021924 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:10:16.021949 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 13:10:16.021957 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:10:16.021983 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 13:10:16.022036 1085975 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.ha-942957 san=[127.0.0.1 192.168.39.68 ha-942957 localhost minikube]
	I0318 13:10:16.090965 1085975 provision.go:177] copyRemoteCerts
	I0318 13:10:16.091041 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:10:16.091071 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.093832 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.094206 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.094234 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.094396 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.094588 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.094740 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.094909 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:16.179035 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:10:16.179122 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:10:16.206260 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:10:16.206343 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 13:10:16.232805 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:10:16.232898 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:10:16.258882 1085975 provision.go:87] duration metric: took 243.867806ms to configureAuth
	I0318 13:10:16.258920 1085975 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:10:16.259106 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:10:16.259257 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.262345 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.262703 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.262738 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.262890 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.263145 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.263332 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.263479 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.263651 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:16.263898 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:16.263918 1085975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:10:16.540112 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:10:16.540145 1085975 main.go:141] libmachine: Checking connection to Docker...
	I0318 13:10:16.540181 1085975 main.go:141] libmachine: (ha-942957) Calling .GetURL
	I0318 13:10:16.541605 1085975 main.go:141] libmachine: (ha-942957) DBG | Using libvirt version 6000000
	I0318 13:10:16.544127 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.544447 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.544474 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.544655 1085975 main.go:141] libmachine: Docker is up and running!
	I0318 13:10:16.544668 1085975 main.go:141] libmachine: Reticulating splines...
	I0318 13:10:16.544676 1085975 client.go:171] duration metric: took 24.859680847s to LocalClient.Create
	I0318 13:10:16.544705 1085975 start.go:167] duration metric: took 24.859747601s to libmachine.API.Create "ha-942957"
	I0318 13:10:16.544718 1085975 start.go:293] postStartSetup for "ha-942957" (driver="kvm2")
	I0318 13:10:16.544760 1085975 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:10:16.544782 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:16.545087 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:10:16.545117 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.547499 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.547781 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.547811 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.547974 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.548212 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.548393 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.548565 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:16.630530 1085975 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:10:16.635219 1085975 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:10:16.635249 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 13:10:16.635318 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 13:10:16.635403 1085975 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 13:10:16.635418 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /etc/ssl/certs/10752082.pem
	I0318 13:10:16.635513 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:10:16.645356 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:10:16.671535 1085975 start.go:296] duration metric: took 126.799398ms for postStartSetup
	I0318 13:10:16.671605 1085975 main.go:141] libmachine: (ha-942957) Calling .GetConfigRaw
	I0318 13:10:16.672222 1085975 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:10:16.674659 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.674958 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.674984 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.675200 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:10:16.675373 1085975 start.go:128] duration metric: took 25.010499122s to createHost
	I0318 13:10:16.675396 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.677648 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.677985 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.678014 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.678119 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.678314 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.678480 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.678660 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.678885 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:16.679183 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:16.679217 1085975 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:10:16.780941 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767416.765636505
	
	I0318 13:10:16.780974 1085975 fix.go:216] guest clock: 1710767416.765636505
	I0318 13:10:16.780982 1085975 fix.go:229] Guest: 2024-03-18 13:10:16.765636505 +0000 UTC Remote: 2024-03-18 13:10:16.67538499 +0000 UTC m=+25.134263651 (delta=90.251515ms)
	I0318 13:10:16.781023 1085975 fix.go:200] guest clock delta is within tolerance: 90.251515ms
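The guest-clock check above compares the output of `date +%s.%N` on the guest with the host clock and accepts a small delta. A minimal sketch of that comparison with the standard library; the 1s tolerance is an assumption for illustration, not minikube's exact threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns the `date +%s.%N` output from the guest into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Pad the fractional part to 9 digits before parsing it as nanoseconds.
    		frac := parts[1] + strings.Repeat("0", 9)
    		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1710767416.765636505") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	host := time.Now()
    	delta := host.Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta < time.Second { // assumed tolerance for this sketch
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v is too large; the guest clock may need syncing\n", delta)
    	}
    }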
	I0318 13:10:16.781029 1085975 start.go:83] releasing machines lock for "ha-942957", held for 25.116266785s
	I0318 13:10:16.781055 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:16.781369 1085975 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:10:16.784280 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.784707 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.784741 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.784890 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:16.785435 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:16.785650 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:16.785736 1085975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:10:16.785792 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.785912 1085975 ssh_runner.go:195] Run: cat /version.json
	I0318 13:10:16.785936 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.788384 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.788745 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.788773 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.788790 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.788912 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.789118 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.789225 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.789254 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.789278 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.789565 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.789553 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:16.789720 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.789875 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.790034 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:16.865129 1085975 ssh_runner.go:195] Run: systemctl --version
	I0318 13:10:16.892786 1085975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:10:17.060087 1085975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:10:17.066212 1085975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:10:17.066283 1085975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:10:17.082827 1085975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:10:17.082856 1085975 start.go:494] detecting cgroup driver to use...
	I0318 13:10:17.082932 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:10:17.099560 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:10:17.114461 1085975 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:10:17.114541 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:10:17.129682 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:10:17.144424 1085975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:10:17.260772 1085975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:10:17.396399 1085975 docker.go:233] disabling docker service ...
	I0318 13:10:17.396474 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:10:17.412052 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:10:17.426062 1085975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:10:17.565994 1085975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:10:17.682678 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:10:17.698151 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:10:17.718408 1085975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:10:17.718470 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:10:17.730543 1085975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:10:17.730628 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:10:17.742758 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:10:17.754592 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:10:17.766316 1085975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:10:17.778421 1085975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:10:17.788956 1085975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:10:17.789016 1085975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:10:17.802605 1085975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:10:17.813511 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:10:17.924526 1085975 ssh_runner.go:195] Run: sudo systemctl restart crio
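The net effect of the four sed edits above, before CRI-O is restarted, is a drop-in at /etc/crio/crio.conf.d/02-crio.conf carrying roughly the settings below. Only the three keys come from the commands in the log; the section headers shown are an assumption about where CRI-O keeps them.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"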
	I0318 13:10:18.062906 1085975 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:10:18.062988 1085975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:10:18.068672 1085975 start.go:562] Will wait 60s for crictl version
	I0318 13:10:18.068743 1085975 ssh_runner.go:195] Run: which crictl
	I0318 13:10:18.073084 1085975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:10:18.110237 1085975 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:10:18.110330 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:10:18.140748 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:10:18.173240 1085975 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:10:18.174730 1085975 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:10:18.177629 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:18.178081 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:18.178108 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:18.178340 1085975 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:10:18.183051 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:10:18.199520 1085975 kubeadm.go:877] updating cluster {Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:10:18.199651 1085975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:10:18.199707 1085975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:10:18.242783 1085975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:10:18.242861 1085975 ssh_runner.go:195] Run: which lz4
	I0318 13:10:18.247684 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 13:10:18.247812 1085975 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:10:18.252522 1085975 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:10:18.252569 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:10:19.977725 1085975 crio.go:444] duration metric: took 1.729948171s to copy over tarball
	I0318 13:10:19.977806 1085975 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:10:22.364382 1085975 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.386530945s)
	I0318 13:10:22.364430 1085975 crio.go:451] duration metric: took 2.38667205s to extract the tarball
	I0318 13:10:22.364441 1085975 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:10:22.406482 1085975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:10:22.457704 1085975 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:10:22.457732 1085975 cache_images.go:84] Images are preloaded, skipping loading
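The decision above (pull and extract the preload tarball, then "Images are preloaded, skipping loading") hinges on whether `crictl images --output json` already lists the expected images. A simplified sketch of that check; the real code parses the JSON output, while this version only does a substring match, which is good enough for illustration:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // hasPreloadedImage reports whether crictl already lists the given image.
    // Checking the raw output for the image name is a simplification of the
    // JSON parsing the driver actually performs.
    func hasPreloadedImage(image string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").CombinedOutput()
    	if err != nil {
    		return false, fmt.Errorf("crictl images failed: %v (%s)", err, out)
    	}
    	return bytes.Contains(out, []byte(image)), nil
    }

    func main() {
    	ok, err := hasPreloadedImage("registry.k8s.io/kube-apiserver:v1.28.4")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if ok {
    		fmt.Println("images are preloaded, skipping tarball extraction")
    	} else {
    		fmt.Println("preload missing; would copy and extract the preloaded-images tarball")
    	}
    }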
	I0318 13:10:22.457743 1085975 kubeadm.go:928] updating node { 192.168.39.68 8443 v1.28.4 crio true true} ...
	I0318 13:10:22.457898 1085975 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-942957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:10:22.457986 1085975 ssh_runner.go:195] Run: crio config
	I0318 13:10:22.513985 1085975 cni.go:84] Creating CNI manager for ""
	I0318 13:10:22.514013 1085975 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 13:10:22.514027 1085975 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:10:22.514057 1085975 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-942957 NodeName:ha-942957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:10:22.514240 1085975 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-942957"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:10:22.514272 1085975 kube-vip.go:111] generating kube-vip config ...
	I0318 13:10:22.514327 1085975 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 13:10:22.533171 1085975 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 13:10:22.533314 1085975 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 13:10:22.533385 1085975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:10:22.544052 1085975 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:10:22.544148 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 13:10:22.554787 1085975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0318 13:10:22.574408 1085975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:10:22.593107 1085975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0318 13:10:22.612295 1085975 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 13:10:22.631469 1085975 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 13:10:22.635602 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:10:22.648752 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:10:22.772280 1085975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:10:22.798920 1085975 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957 for IP: 192.168.39.68
	I0318 13:10:22.798946 1085975 certs.go:194] generating shared ca certs ...
	I0318 13:10:22.798964 1085975 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:22.799142 1085975 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 13:10:22.799225 1085975 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 13:10:22.799238 1085975 certs.go:256] generating profile certs ...
	I0318 13:10:22.799314 1085975 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key
	I0318 13:10:22.799331 1085975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt with IP's: []
	I0318 13:10:22.984629 1085975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt ...
	I0318 13:10:22.984664 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt: {Name:mk72770fd094ac57b7f08b92822bfa33014aa130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:22.984854 1085975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key ...
	I0318 13:10:22.984880 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key: {Name:mk92717c7fc69d31773f4ece55bb512c38949d8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:22.984966 1085975 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.4a593926
	I0318 13:10:22.984981 1085975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.4a593926 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.254]
	I0318 13:10:23.092142 1085975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.4a593926 ...
	I0318 13:10:23.092179 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.4a593926: {Name:mkd040c2f6dabb7f5d21f0d07a1359550af09051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:23.092351 1085975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.4a593926 ...
	I0318 13:10:23.092364 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.4a593926: {Name:mk754980ae12a2603c5698ed6a63aa3a63976015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:23.092439 1085975 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.4a593926 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt
	I0318 13:10:23.092512 1085975 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.4a593926 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key
	I0318 13:10:23.092563 1085975 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key
	I0318 13:10:23.092577 1085975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt with IP's: []
	I0318 13:10:23.176564 1085975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt ...
	I0318 13:10:23.176602 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt: {Name:mka5b3142058f0d61261c04d9ec811971eddfbfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:23.176764 1085975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key ...
	I0318 13:10:23.176775 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key: {Name:mk6ffe02690f2bea5be214320ff8071a59348b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
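The apiserver profile certificate above is issued with SANs for the service IP (10.96.0.1), loopback, 10.0.0.1, the node IP (192.168.39.68) and the HA VIP (192.168.39.254). minikube does this in Go; an openssl equivalent is shown here purely as an illustration, with hypothetical file names:
	# Illustrative only: sign an apiserver cert with the SAN list from the log above.
	openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
	  -keyout apiserver.key -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.68,IP:192.168.39.254")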
	I0318 13:10:23.176840 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:10:23.176858 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:10:23.176868 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:10:23.176878 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:10:23.176889 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:10:23.176902 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:10:23.176912 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:10:23.176921 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:10:23.176971 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 13:10:23.177004 1085975 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 13:10:23.177013 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 13:10:23.177032 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:10:23.177054 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:10:23.177074 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 13:10:23.177109 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:10:23.177157 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:10:23.177192 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem -> /usr/share/ca-certificates/1075208.pem
	I0318 13:10:23.177204 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /usr/share/ca-certificates/10752082.pem
	I0318 13:10:23.177843 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:10:23.206273 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:10:23.234050 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:10:23.260561 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:10:23.287344 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:10:23.313475 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:10:23.339480 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:10:23.366812 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:10:23.392858 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:10:23.419475 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 13:10:23.446493 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 13:10:23.473492 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:10:23.490650 1085975 ssh_runner.go:195] Run: openssl version
	I0318 13:10:23.496760 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:10:23.507582 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:10:23.512387 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:10:23.512466 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:10:23.518441 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:10:23.529033 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 13:10:23.539985 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 13:10:23.544610 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:10:23.544678 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 13:10:23.550673 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 13:10:23.565931 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 13:10:23.582080 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 13:10:23.588687 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:10:23.588756 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 13:10:23.596134 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
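Each CA bundle copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is what makes it discoverable to TLS clients. The naming scheme in a two-line sketch:
	# The link name is the cert's subject hash plus ".0" (the layout c_rehash produces).
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"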
	I0318 13:10:23.612992 1085975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:10:23.617614 1085975 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:10:23.617705 1085975 kubeadm.go:391] StartCluster: {Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:10:23.617827 1085975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:10:23.617891 1085975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:10:23.662686 1085975 cri.go:89] found id: ""
	I0318 13:10:23.662809 1085975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 13:10:23.673191 1085975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:10:23.684085 1085975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:10:23.694399 1085975 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:10:23.694419 1085975 kubeadm.go:156] found existing configuration files:
	
	I0318 13:10:23.694463 1085975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:10:23.703573 1085975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:10:23.703632 1085975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:10:23.714152 1085975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:10:23.723161 1085975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:10:23.723210 1085975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:10:23.732323 1085975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:10:23.741268 1085975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:10:23.741326 1085975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:10:23.750261 1085975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:10:23.761140 1085975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:10:23.761207 1085975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:10:23.771686 1085975 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:10:24.018539 1085975 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:10:34.924636 1085975 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:10:34.924717 1085975 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:10:34.924809 1085975 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:10:34.924952 1085975 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:10:34.925086 1085975 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:10:34.925176 1085975 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:10:34.926965 1085975 out.go:204]   - Generating certificates and keys ...
	I0318 13:10:34.927064 1085975 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:10:34.927142 1085975 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:10:34.927220 1085975 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 13:10:34.927301 1085975 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 13:10:34.927392 1085975 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 13:10:34.927467 1085975 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 13:10:34.927548 1085975 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 13:10:34.927700 1085975 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-942957 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I0318 13:10:34.927785 1085975 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 13:10:34.927959 1085975 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-942957 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I0318 13:10:34.928052 1085975 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 13:10:34.928141 1085975 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 13:10:34.928220 1085975 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 13:10:34.928307 1085975 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:10:34.928371 1085975 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:10:34.928439 1085975 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:10:34.928517 1085975 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:10:34.928595 1085975 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:10:34.928698 1085975 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:10:34.928791 1085975 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:10:34.930425 1085975 out.go:204]   - Booting up control plane ...
	I0318 13:10:34.930555 1085975 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:10:34.930641 1085975 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:10:34.930702 1085975 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:10:34.930799 1085975 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:10:34.930887 1085975 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:10:34.930929 1085975 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:10:34.931053 1085975 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:10:34.931118 1085975 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.625794 seconds
	I0318 13:10:34.931213 1085975 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:10:34.931326 1085975 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:10:34.931376 1085975 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:10:34.931569 1085975 kubeadm.go:309] [mark-control-plane] Marking the node ha-942957 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:10:34.931650 1085975 kubeadm.go:309] [bootstrap-token] Using token: bc0gmg.0whp06jnjk6h7olc
	I0318 13:10:34.933085 1085975 out.go:204]   - Configuring RBAC rules ...
	I0318 13:10:34.933228 1085975 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:10:34.933307 1085975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:10:34.933482 1085975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:10:34.933678 1085975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:10:34.933804 1085975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:10:34.933880 1085975 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:10:34.933989 1085975 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:10:34.934052 1085975 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:10:34.934135 1085975 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:10:34.934144 1085975 kubeadm.go:309] 
	I0318 13:10:34.934228 1085975 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:10:34.934248 1085975 kubeadm.go:309] 
	I0318 13:10:34.934342 1085975 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:10:34.934351 1085975 kubeadm.go:309] 
	I0318 13:10:34.934382 1085975 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:10:34.934466 1085975 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:10:34.934540 1085975 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:10:34.934550 1085975 kubeadm.go:309] 
	I0318 13:10:34.934634 1085975 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:10:34.934649 1085975 kubeadm.go:309] 
	I0318 13:10:34.934713 1085975 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:10:34.934721 1085975 kubeadm.go:309] 
	I0318 13:10:34.934771 1085975 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:10:34.934844 1085975 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:10:34.934963 1085975 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:10:34.934981 1085975 kubeadm.go:309] 
	I0318 13:10:34.935085 1085975 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:10:34.935186 1085975 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:10:34.935195 1085975 kubeadm.go:309] 
	I0318 13:10:34.935304 1085975 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bc0gmg.0whp06jnjk6h7olc \
	I0318 13:10:34.935432 1085975 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 13:10:34.935463 1085975 kubeadm.go:309] 	--control-plane 
	I0318 13:10:34.935473 1085975 kubeadm.go:309] 
	I0318 13:10:34.935580 1085975 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:10:34.935608 1085975 kubeadm.go:309] 
	I0318 13:10:34.935712 1085975 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bc0gmg.0whp06jnjk6h7olc \
	I0318 13:10:34.935813 1085975 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
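Both join commands above carry a discovery-token CA cert hash. That hash can be recomputed from the cluster CA to verify it matches; the standard kubeadm recipe, adapted to the certificate directory used in this run (/var/lib/minikube/certs), is roughly:
	# Recompute the sha256 discovery hash of the cluster CA public key.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'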
	I0318 13:10:34.935830 1085975 cni.go:84] Creating CNI manager for ""
	I0318 13:10:34.935837 1085975 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 13:10:34.937520 1085975 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 13:10:34.939260 1085975 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 13:10:34.960039 1085975 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 13:10:34.960065 1085975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 13:10:34.990411 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 13:10:36.003170 1085975 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.012709402s)
	I0318 13:10:36.003232 1085975 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:10:36.003350 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:36.003355 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-942957 minikube.k8s.io/updated_at=2024_03_18T13_10_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=ha-942957 minikube.k8s.io/primary=true
	I0318 13:10:36.023685 1085975 ops.go:34] apiserver oom_adj: -16
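Right after init, minikube grants cluster-admin to the kube-system default service account (the minikube-rbac binding), labels the primary node, and reads the apiserver's oom_adj (-16 above). A sketch for spot-checking the binding and labels, using the kubectl binary and kubeconfig paths from this run:
	K=/var/lib/minikube/binaries/v1.28.4/kubectl
	sudo $K --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac -o wide
	sudo $K --kubeconfig=/var/lib/minikube/kubeconfig get node ha-942957 --show-labels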
	I0318 13:10:36.202362 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:36.703150 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:37.203113 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:37.703030 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:38.203008 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:38.702557 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:39.203107 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:39.703141 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:40.203292 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:40.703258 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:41.202454 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:41.703077 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:42.203366 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:42.702766 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:43.202571 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:43.702456 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:44.203352 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:44.702541 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:45.202497 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:45.703278 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:46.202912 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:46.702932 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:47.202576 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:47.702392 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:48.203053 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:48.354627 1085975 kubeadm.go:1107] duration metric: took 12.351352858s to wait for elevateKubeSystemPrivileges
	W0318 13:10:48.354673 1085975 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:10:48.354683 1085975 kubeadm.go:393] duration metric: took 24.736991777s to StartCluster
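The burst of "kubectl get sa default" calls between 13:10:36 and 13:10:48 is a poll: minikube waits for the default ServiceAccount to exist before treating the kube-system privilege escalation as done, which here took 12.35s. A minimal equivalent of that wait loop:
	# Poll every 500ms until the "default" ServiceAccount exists (paths from this run).
	K=/var/lib/minikube/binaries/v1.28.4/kubectl
	until sudo $K --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done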
	I0318 13:10:48.354709 1085975 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:48.354797 1085975 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:10:48.355897 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:48.356178 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 13:10:48.356214 1085975 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:10:48.356246 1085975 start.go:240] waiting for startup goroutines ...
	I0318 13:10:48.356261 1085975 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:10:48.356324 1085975 addons.go:69] Setting storage-provisioner=true in profile "ha-942957"
	I0318 13:10:48.356339 1085975 addons.go:69] Setting default-storageclass=true in profile "ha-942957"
	I0318 13:10:48.356360 1085975 addons.go:234] Setting addon storage-provisioner=true in "ha-942957"
	I0318 13:10:48.356378 1085975 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-942957"
	I0318 13:10:48.356392 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:10:48.356484 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:10:48.356836 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:48.356846 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:48.356872 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:48.356873 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:48.372994 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42403
	I0318 13:10:48.373360 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35917
	I0318 13:10:48.373518 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:48.373798 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:48.374111 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:48.374137 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:48.374351 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:48.374379 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:48.374484 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:48.374749 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:10:48.374779 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:48.375272 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:48.375297 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:48.377322 1085975 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:10:48.377630 1085975 kapi.go:59] client config for ha-942957: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt", KeyFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key", CAFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:10:48.378136 1085975 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 13:10:48.378345 1085975 addons.go:234] Setting addon default-storageclass=true in "ha-942957"
	I0318 13:10:48.378390 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:10:48.378655 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:48.378687 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:48.391816 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0318 13:10:48.392348 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:48.392960 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:48.392988 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:48.393323 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:48.393517 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:10:48.394441 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35131
	I0318 13:10:48.394885 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:48.395409 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:48.395427 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:48.395449 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:48.397759 1085975 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:10:48.395842 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:48.399386 1085975 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:10:48.399405 1085975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:10:48.399427 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:48.399996 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:48.400062 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:48.402534 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:48.402992 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:48.403022 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:48.403163 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:48.403412 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:48.403601 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:48.403799 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:48.416542 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I0318 13:10:48.417035 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:48.417591 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:48.417620 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:48.417994 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:48.418255 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:10:48.420156 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:48.420462 1085975 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:10:48.420478 1085975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:10:48.420496 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:48.423448 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:48.423931 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:48.424000 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:48.424193 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:48.424828 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:48.425071 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:48.425281 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:48.520709 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 13:10:48.528875 1085975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:10:48.595369 1085975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:10:49.206587 1085975 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
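The long sed pipeline at 13:10:48.520709 edits the coredns ConfigMap in place, inserting a hosts block so that host.minikube.internal resolves to the host-side gateway 192.168.39.1 from inside the cluster. To see the injected stanza afterwards (sketch, run from any machine with cluster access; the commented lines show the shape of the block the sed expression adds):
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	#        hosts {
	#           192.168.39.1 host.minikube.internal
	#           fallthrough
	#        }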
	I0318 13:10:49.401036 1085975 main.go:141] libmachine: Making call to close driver server
	I0318 13:10:49.401069 1085975 main.go:141] libmachine: (ha-942957) Calling .Close
	I0318 13:10:49.401163 1085975 main.go:141] libmachine: Making call to close driver server
	I0318 13:10:49.401192 1085975 main.go:141] libmachine: (ha-942957) Calling .Close
	I0318 13:10:49.401466 1085975 main.go:141] libmachine: (ha-942957) DBG | Closing plugin on server side
	I0318 13:10:49.401513 1085975 main.go:141] libmachine: (ha-942957) DBG | Closing plugin on server side
	I0318 13:10:49.401546 1085975 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:10:49.401549 1085975 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:10:49.401564 1085975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:10:49.401567 1085975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:10:49.401579 1085975 main.go:141] libmachine: Making call to close driver server
	I0318 13:10:49.401596 1085975 main.go:141] libmachine: (ha-942957) Calling .Close
	I0318 13:10:49.401610 1085975 main.go:141] libmachine: Making call to close driver server
	I0318 13:10:49.401623 1085975 main.go:141] libmachine: (ha-942957) Calling .Close
	I0318 13:10:49.401849 1085975 main.go:141] libmachine: (ha-942957) DBG | Closing plugin on server side
	I0318 13:10:49.401866 1085975 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:10:49.401876 1085975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:10:49.401878 1085975 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:10:49.401892 1085975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:10:49.402014 1085975 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 13:10:49.402038 1085975 round_trippers.go:469] Request Headers:
	I0318 13:10:49.402048 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.402054 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:10:49.415900 1085975 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 13:10:49.416844 1085975 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 13:10:49.416868 1085975 round_trippers.go:469] Request Headers:
	I0318 13:10:49.416879 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.416886 1085975 round_trippers.go:473]     Content-Type: application/json
	I0318 13:10:49.416899 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:10:49.420848 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:49.421029 1085975 main.go:141] libmachine: Making call to close driver server
	I0318 13:10:49.421044 1085975 main.go:141] libmachine: (ha-942957) Calling .Close
	I0318 13:10:49.421363 1085975 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:10:49.421401 1085975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:10:49.423324 1085975 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 13:10:49.424600 1085975 addons.go:505] duration metric: took 1.06833728s for enable addons: enabled=[storage-provisioner default-storageclass]
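With storage-provisioner and default-storageclass enabled in just over a second, the primary node is fully configured before the m02 control plane is provisioned. The addon state for this profile can be listed with the standard minikube CLI (sketch):
	out/minikube-linux-amd64 -p ha-942957 addons list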
	I0318 13:10:49.424640 1085975 start.go:245] waiting for cluster config update ...
	I0318 13:10:49.424667 1085975 start.go:254] writing updated cluster config ...
	I0318 13:10:49.426314 1085975 out.go:177] 
	I0318 13:10:49.427664 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:10:49.427746 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:10:49.429489 1085975 out.go:177] * Starting "ha-942957-m02" control-plane node in "ha-942957" cluster
	I0318 13:10:49.431012 1085975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:10:49.431043 1085975 cache.go:56] Caching tarball of preloaded images
	I0318 13:10:49.431145 1085975 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:10:49.431168 1085975 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:10:49.431256 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:10:49.431470 1085975 start.go:360] acquireMachinesLock for ha-942957-m02: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:10:49.431536 1085975 start.go:364] duration metric: took 41.802µs to acquireMachinesLock for "ha-942957-m02"
	I0318 13:10:49.431561 1085975 start.go:93] Provisioning new machine with config: &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:10:49.431633 1085975 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0318 13:10:49.433368 1085975 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:10:49.433456 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:49.433489 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:49.448574 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I0318 13:10:49.449008 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:49.449473 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:49.449496 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:49.449865 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:49.450041 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetMachineName
	I0318 13:10:49.450164 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:10:49.450326 1085975 start.go:159] libmachine.API.Create for "ha-942957" (driver="kvm2")
	I0318 13:10:49.450350 1085975 client.go:168] LocalClient.Create starting
	I0318 13:10:49.450391 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 13:10:49.450437 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:10:49.450453 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:10:49.450510 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 13:10:49.450529 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:10:49.450537 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:10:49.450553 1085975 main.go:141] libmachine: Running pre-create checks...
	I0318 13:10:49.450561 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .PreCreateCheck
	I0318 13:10:49.450724 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetConfigRaw
	I0318 13:10:49.451100 1085975 main.go:141] libmachine: Creating machine...
	I0318 13:10:49.451114 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .Create
	I0318 13:10:49.451293 1085975 main.go:141] libmachine: (ha-942957-m02) Creating KVM machine...
	I0318 13:10:49.452592 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found existing default KVM network
	I0318 13:10:49.452706 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found existing private KVM network mk-ha-942957
	I0318 13:10:49.452886 1085975 main.go:141] libmachine: (ha-942957-m02) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02 ...
	I0318 13:10:49.452910 1085975 main.go:141] libmachine: (ha-942957-m02) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 13:10:49.452978 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:49.452877 1086314 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:10:49.453055 1085975 main.go:141] libmachine: (ha-942957-m02) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 13:10:49.729200 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:49.729032 1086314 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa...
	I0318 13:10:49.888681 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:49.888533 1086314 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/ha-942957-m02.rawdisk...
	I0318 13:10:49.888717 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Writing magic tar header
	I0318 13:10:49.888730 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Writing SSH key tar header
	I0318 13:10:49.888743 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:49.888673 1086314 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02 ...
	I0318 13:10:49.888875 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02
	I0318 13:10:49.888903 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02 (perms=drwx------)
	I0318 13:10:49.888914 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 13:10:49.888931 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:10:49.888944 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 13:10:49.888956 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 13:10:49.888966 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins
	I0318 13:10:49.888996 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home
	I0318 13:10:49.889012 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Skipping /home - not owner
	I0318 13:10:49.889020 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 13:10:49.889032 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 13:10:49.889045 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 13:10:49.889061 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 13:10:49.889074 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 13:10:49.889086 1085975 main.go:141] libmachine: (ha-942957-m02) Creating domain...
	I0318 13:10:49.889944 1085975 main.go:141] libmachine: (ha-942957-m02) define libvirt domain using xml: 
	I0318 13:10:49.889968 1085975 main.go:141] libmachine: (ha-942957-m02) <domain type='kvm'>
	I0318 13:10:49.889979 1085975 main.go:141] libmachine: (ha-942957-m02)   <name>ha-942957-m02</name>
	I0318 13:10:49.889986 1085975 main.go:141] libmachine: (ha-942957-m02)   <memory unit='MiB'>2200</memory>
	I0318 13:10:49.889994 1085975 main.go:141] libmachine: (ha-942957-m02)   <vcpu>2</vcpu>
	I0318 13:10:49.890000 1085975 main.go:141] libmachine: (ha-942957-m02)   <features>
	I0318 13:10:49.890008 1085975 main.go:141] libmachine: (ha-942957-m02)     <acpi/>
	I0318 13:10:49.890015 1085975 main.go:141] libmachine: (ha-942957-m02)     <apic/>
	I0318 13:10:49.890023 1085975 main.go:141] libmachine: (ha-942957-m02)     <pae/>
	I0318 13:10:49.890031 1085975 main.go:141] libmachine: (ha-942957-m02)     
	I0318 13:10:49.890065 1085975 main.go:141] libmachine: (ha-942957-m02)   </features>
	I0318 13:10:49.890102 1085975 main.go:141] libmachine: (ha-942957-m02)   <cpu mode='host-passthrough'>
	I0318 13:10:49.890115 1085975 main.go:141] libmachine: (ha-942957-m02)   
	I0318 13:10:49.890122 1085975 main.go:141] libmachine: (ha-942957-m02)   </cpu>
	I0318 13:10:49.890151 1085975 main.go:141] libmachine: (ha-942957-m02)   <os>
	I0318 13:10:49.890163 1085975 main.go:141] libmachine: (ha-942957-m02)     <type>hvm</type>
	I0318 13:10:49.890235 1085975 main.go:141] libmachine: (ha-942957-m02)     <boot dev='cdrom'/>
	I0318 13:10:49.890282 1085975 main.go:141] libmachine: (ha-942957-m02)     <boot dev='hd'/>
	I0318 13:10:49.890293 1085975 main.go:141] libmachine: (ha-942957-m02)     <bootmenu enable='no'/>
	I0318 13:10:49.890300 1085975 main.go:141] libmachine: (ha-942957-m02)   </os>
	I0318 13:10:49.890306 1085975 main.go:141] libmachine: (ha-942957-m02)   <devices>
	I0318 13:10:49.890313 1085975 main.go:141] libmachine: (ha-942957-m02)     <disk type='file' device='cdrom'>
	I0318 13:10:49.890321 1085975 main.go:141] libmachine: (ha-942957-m02)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/boot2docker.iso'/>
	I0318 13:10:49.890333 1085975 main.go:141] libmachine: (ha-942957-m02)       <target dev='hdc' bus='scsi'/>
	I0318 13:10:49.890339 1085975 main.go:141] libmachine: (ha-942957-m02)       <readonly/>
	I0318 13:10:49.890345 1085975 main.go:141] libmachine: (ha-942957-m02)     </disk>
	I0318 13:10:49.890354 1085975 main.go:141] libmachine: (ha-942957-m02)     <disk type='file' device='disk'>
	I0318 13:10:49.890365 1085975 main.go:141] libmachine: (ha-942957-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 13:10:49.890404 1085975 main.go:141] libmachine: (ha-942957-m02)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/ha-942957-m02.rawdisk'/>
	I0318 13:10:49.890429 1085975 main.go:141] libmachine: (ha-942957-m02)       <target dev='hda' bus='virtio'/>
	I0318 13:10:49.890440 1085975 main.go:141] libmachine: (ha-942957-m02)     </disk>
	I0318 13:10:49.890451 1085975 main.go:141] libmachine: (ha-942957-m02)     <interface type='network'>
	I0318 13:10:49.890466 1085975 main.go:141] libmachine: (ha-942957-m02)       <source network='mk-ha-942957'/>
	I0318 13:10:49.890478 1085975 main.go:141] libmachine: (ha-942957-m02)       <model type='virtio'/>
	I0318 13:10:49.890489 1085975 main.go:141] libmachine: (ha-942957-m02)     </interface>
	I0318 13:10:49.890500 1085975 main.go:141] libmachine: (ha-942957-m02)     <interface type='network'>
	I0318 13:10:49.890523 1085975 main.go:141] libmachine: (ha-942957-m02)       <source network='default'/>
	I0318 13:10:49.890545 1085975 main.go:141] libmachine: (ha-942957-m02)       <model type='virtio'/>
	I0318 13:10:49.890558 1085975 main.go:141] libmachine: (ha-942957-m02)     </interface>
	I0318 13:10:49.890568 1085975 main.go:141] libmachine: (ha-942957-m02)     <serial type='pty'>
	I0318 13:10:49.890595 1085975 main.go:141] libmachine: (ha-942957-m02)       <target port='0'/>
	I0318 13:10:49.890606 1085975 main.go:141] libmachine: (ha-942957-m02)     </serial>
	I0318 13:10:49.890618 1085975 main.go:141] libmachine: (ha-942957-m02)     <console type='pty'>
	I0318 13:10:49.890630 1085975 main.go:141] libmachine: (ha-942957-m02)       <target type='serial' port='0'/>
	I0318 13:10:49.890645 1085975 main.go:141] libmachine: (ha-942957-m02)     </console>
	I0318 13:10:49.890669 1085975 main.go:141] libmachine: (ha-942957-m02)     <rng model='virtio'>
	I0318 13:10:49.890692 1085975 main.go:141] libmachine: (ha-942957-m02)       <backend model='random'>/dev/random</backend>
	I0318 13:10:49.890709 1085975 main.go:141] libmachine: (ha-942957-m02)     </rng>
	I0318 13:10:49.890721 1085975 main.go:141] libmachine: (ha-942957-m02)     
	I0318 13:10:49.890730 1085975 main.go:141] libmachine: (ha-942957-m02)     
	I0318 13:10:49.890739 1085975 main.go:141] libmachine: (ha-942957-m02)   </devices>
	I0318 13:10:49.890750 1085975 main.go:141] libmachine: (ha-942957-m02) </domain>
	I0318 13:10:49.890765 1085975 main.go:141] libmachine: (ha-942957-m02) 
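The XML dumped above is the full libvirt domain definition the kvm2 driver generates for the m02 control-plane VM (2 vCPU, 2200 MiB, boot ISO plus raw disk, two virtio NICs on mk-ha-942957 and default). As a rough illustrative sketch only, and not the driver's actual code, defining and booting such a domain from Go could look like the following, assuming the libvirt.org/go/libvirt bindings, a local qemu:///system connection, and the XML saved to ha-942957-m02.xml:

package main

import (
	"fmt"
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the local libvirt daemon, as the kvm2 driver does on the host.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// The domain definition printed in the log, saved to a file for this sketch.
	xmlConfig, err := os.ReadFile("ha-942957-m02.xml")
	if err != nil {
		log.Fatalf("read domain xml: %v", err)
	}

	// "define libvirt domain using xml": register a persistent domain.
	dom, err := conn.DomainDefineXML(string(xmlConfig))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	// "Creating domain...": boot the defined domain.
	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}

	name, _ := dom.GetName()
	fmt.Printf("domain %s started\n", name)
}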
	I0318 13:10:49.897843 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:43:7a:a2 in network default
	I0318 13:10:49.898368 1085975 main.go:141] libmachine: (ha-942957-m02) Ensuring networks are active...
	I0318 13:10:49.898395 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:49.899121 1085975 main.go:141] libmachine: (ha-942957-m02) Ensuring network default is active
	I0318 13:10:49.899508 1085975 main.go:141] libmachine: (ha-942957-m02) Ensuring network mk-ha-942957 is active
	I0318 13:10:49.899822 1085975 main.go:141] libmachine: (ha-942957-m02) Getting domain xml...
	I0318 13:10:49.900586 1085975 main.go:141] libmachine: (ha-942957-m02) Creating domain...
	I0318 13:10:51.153496 1085975 main.go:141] libmachine: (ha-942957-m02) Waiting to get IP...
	I0318 13:10:51.154559 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:51.154977 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:51.155059 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:51.155004 1086314 retry.go:31] will retry after 304.73384ms: waiting for machine to come up
	I0318 13:10:51.461750 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:51.462228 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:51.462273 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:51.462153 1086314 retry.go:31] will retry after 316.844478ms: waiting for machine to come up
	I0318 13:10:51.782145 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:51.782615 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:51.782641 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:51.782559 1086314 retry.go:31] will retry after 484.230769ms: waiting for machine to come up
	I0318 13:10:52.268240 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:52.268810 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:52.268836 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:52.268772 1086314 retry.go:31] will retry after 523.434483ms: waiting for machine to come up
	I0318 13:10:52.793578 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:52.793983 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:52.794011 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:52.793928 1086314 retry.go:31] will retry after 497.999879ms: waiting for machine to come up
	I0318 13:10:53.293455 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:53.293955 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:53.293986 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:53.293916 1086314 retry.go:31] will retry after 673.425463ms: waiting for machine to come up
	I0318 13:10:53.969019 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:53.969485 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:53.969513 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:53.969422 1086314 retry.go:31] will retry after 847.284583ms: waiting for machine to come up
	I0318 13:10:54.818953 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:54.819333 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:54.819367 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:54.819304 1086314 retry.go:31] will retry after 1.325118174s: waiting for machine to come up
	I0318 13:10:56.145864 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:56.146313 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:56.146345 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:56.146257 1086314 retry.go:31] will retry after 1.795876809s: waiting for machine to come up
	I0318 13:10:57.944232 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:57.944761 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:57.944805 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:57.944713 1086314 retry.go:31] will retry after 1.744054736s: waiting for machine to come up
	I0318 13:10:59.691017 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:59.691544 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:59.691576 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:59.691495 1086314 retry.go:31] will retry after 2.51806491s: waiting for machine to come up
	I0318 13:11:02.212991 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:02.213429 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:11:02.213457 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:11:02.213377 1086314 retry.go:31] will retry after 2.637821328s: waiting for machine to come up
	I0318 13:11:04.852429 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:04.853031 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:11:04.853062 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:11:04.852991 1086314 retry.go:31] will retry after 3.347642909s: waiting for machine to come up
	I0318 13:11:08.204516 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:08.204861 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:11:08.204887 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:11:08.204815 1086314 retry.go:31] will retry after 5.549852077s: waiting for machine to come up
	I0318 13:11:13.760003 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.760478 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has current primary IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.760511 1085975 main.go:141] libmachine: (ha-942957-m02) Found IP for machine: 192.168.39.22
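The "Waiting to get IP" block above is a retry loop: each "will retry after" line is one iteration of a growing, jittered backoff that polls the DHCP leases of network mk-ha-942957 for the domain's MAC address until one appears. A minimal sketch of that pattern in Go, where lookupIP is a hypothetical stand-in for the real lease lookup (this is not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for inspecting the libvirt DHCP leases
// of network mk-ha-942957 for the domain's MAC address.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls with a growing, jittered delay, loosely matching the
// 0.3s .. 5.5s retry intervals seen in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		if delay < 5*time.Second {
			delay += delay / 2
		}
	}
	return "", fmt.Errorf("no IP after %s", timeout)
}

func main() {
	fmt.Println(waitForIP(2 * time.Minute))
}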
	I0318 13:11:13.760526 1085975 main.go:141] libmachine: (ha-942957-m02) Reserving static IP address...
	I0318 13:11:13.760873 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find host DHCP lease matching {name: "ha-942957-m02", mac: "52:54:00:20:c9:87", ip: "192.168.39.22"} in network mk-ha-942957
	I0318 13:11:13.838869 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Getting to WaitForSSH function...
	I0318 13:11:13.838902 1085975 main.go:141] libmachine: (ha-942957-m02) Reserved static IP address: 192.168.39.22
	I0318 13:11:13.838914 1085975 main.go:141] libmachine: (ha-942957-m02) Waiting for SSH to be available...
	I0318 13:11:13.841898 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.842346 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:13.842371 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.842503 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Using SSH client type: external
	I0318 13:11:13.842529 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa (-rw-------)
	I0318 13:11:13.842557 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:11:13.842588 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | About to run SSH command:
	I0318 13:11:13.842600 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | exit 0
	I0318 13:11:13.967996 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | SSH cmd err, output: <nil>: 
	I0318 13:11:13.968301 1085975 main.go:141] libmachine: (ha-942957-m02) KVM machine creation complete!
	I0318 13:11:13.968615 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetConfigRaw
	I0318 13:11:13.969177 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:13.969423 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:13.969620 1085975 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 13:11:13.969635 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:11:13.970999 1085975 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 13:11:13.971014 1085975 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 13:11:13.971020 1085975 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 13:11:13.971026 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:13.973564 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.973927 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:13.973959 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.974086 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:13.974255 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:13.974426 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:13.974574 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:13.974746 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:13.975017 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:13.975034 1085975 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 13:11:14.079477 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:11:14.079502 1085975 main.go:141] libmachine: Detecting the provisioner...
	I0318 13:11:14.079511 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.082270 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.082612 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.082646 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.082882 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:14.083098 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.083251 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.083391 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:14.083543 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:14.083762 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:14.083775 1085975 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 13:11:14.189208 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 13:11:14.189290 1085975 main.go:141] libmachine: found compatible host: buildroot
	I0318 13:11:14.189297 1085975 main.go:141] libmachine: Provisioning with buildroot...
	I0318 13:11:14.189305 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetMachineName
	I0318 13:11:14.189643 1085975 buildroot.go:166] provisioning hostname "ha-942957-m02"
	I0318 13:11:14.189681 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetMachineName
	I0318 13:11:14.189889 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.192754 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.193121 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.193167 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.193313 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:14.193508 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.193730 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.193907 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:14.194106 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:14.194327 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:14.194346 1085975 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-942957-m02 && echo "ha-942957-m02" | sudo tee /etc/hostname
	I0318 13:11:14.315415 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-942957-m02
	
	I0318 13:11:14.315443 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.318088 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.318455 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.318488 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.318653 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:14.318890 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.319045 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.319152 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:14.319373 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:14.319598 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:14.319617 1085975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-942957-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-942957-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-942957-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
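The hostname and /etc/hosts commands above are sent to the freshly created VM over SSH, authenticating with the id_rsa key generated during machine creation and the docker user shown in the SSH client line. As an illustrative sketch only (assuming golang.org/x/crypto/ssh; this is not minikube's own ssh_runner), running such a provisioning command from Go could look like:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Private key created during machine creation (same path as in the log).
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}

	client, err := ssh.Dial("tcp", "192.168.39.22:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// The same provisioning command the log shows being run on the new node.
	out, err := session.CombinedOutput(`sudo hostname ha-942957-m02 && echo "ha-942957-m02" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatalf("remote command failed: %v\n%s", err, out)
	}
	log.Printf("remote output: %s", out)
}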
	I0318 13:11:14.442263 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:11:14.442300 1085975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 13:11:14.442322 1085975 buildroot.go:174] setting up certificates
	I0318 13:11:14.442333 1085975 provision.go:84] configureAuth start
	I0318 13:11:14.442343 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetMachineName
	I0318 13:11:14.442679 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:11:14.445488 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.445885 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.445912 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.446082 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.448758 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.449199 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.449231 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.449354 1085975 provision.go:143] copyHostCerts
	I0318 13:11:14.449388 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:11:14.449430 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 13:11:14.449442 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:11:14.449524 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 13:11:14.449636 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:11:14.449661 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 13:11:14.449669 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:11:14.449708 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 13:11:14.449786 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:11:14.449815 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 13:11:14.449824 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:11:14.449861 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 13:11:14.449945 1085975 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.ha-942957-m02 san=[127.0.0.1 192.168.39.22 ha-942957-m02 localhost minikube]
	I0318 13:11:14.734550 1085975 provision.go:177] copyRemoteCerts
	I0318 13:11:14.734648 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:11:14.734686 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.737413 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.737766 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.737801 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.737957 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:14.738194 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.738412 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:14.738568 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	I0318 13:11:14.823317 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:11:14.823424 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:11:14.849854 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:11:14.849947 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 13:11:14.876765 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:11:14.876861 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:11:14.903102 1085975 provision.go:87] duration metric: took 460.755262ms to configureAuth
	I0318 13:11:14.903140 1085975 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:11:14.903369 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:11:14.903473 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.906201 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.906520 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.906557 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.906669 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:14.906899 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.907068 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.907201 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:14.907379 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:14.907563 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:14.907578 1085975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:11:15.186532 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:11:15.186574 1085975 main.go:141] libmachine: Checking connection to Docker...
	I0318 13:11:15.186586 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetURL
	I0318 13:11:15.188285 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Using libvirt version 6000000
	I0318 13:11:15.190769 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.191366 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.191400 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.191617 1085975 main.go:141] libmachine: Docker is up and running!
	I0318 13:11:15.191641 1085975 main.go:141] libmachine: Reticulating splines...
	I0318 13:11:15.191651 1085975 client.go:171] duration metric: took 25.741291565s to LocalClient.Create
	I0318 13:11:15.191697 1085975 start.go:167] duration metric: took 25.74137213s to libmachine.API.Create "ha-942957"
	I0318 13:11:15.191710 1085975 start.go:293] postStartSetup for "ha-942957-m02" (driver="kvm2")
	I0318 13:11:15.191724 1085975 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:11:15.191766 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:15.192104 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:11:15.192136 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:15.194725 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.195138 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.195180 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.195321 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:15.195571 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:15.195751 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:15.195928 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	I0318 13:11:15.282716 1085975 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:11:15.287369 1085975 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:11:15.287401 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 13:11:15.287470 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 13:11:15.287543 1085975 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 13:11:15.287555 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /etc/ssl/certs/10752082.pem
	I0318 13:11:15.287636 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:11:15.297867 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:11:15.324585 1085975 start.go:296] duration metric: took 132.860177ms for postStartSetup
	I0318 13:11:15.324662 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetConfigRaw
	I0318 13:11:15.325299 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:11:15.327886 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.328282 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.328318 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.328584 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:11:15.328841 1085975 start.go:128] duration metric: took 25.897193359s to createHost
	I0318 13:11:15.328875 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:15.330988 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.331414 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.331443 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.331557 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:15.331765 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:15.331959 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:15.332072 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:15.332204 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:15.332383 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:15.332396 1085975 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:11:15.436627 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767475.410528309
	
	I0318 13:11:15.436659 1085975 fix.go:216] guest clock: 1710767475.410528309
	I0318 13:11:15.436670 1085975 fix.go:229] Guest: 2024-03-18 13:11:15.410528309 +0000 UTC Remote: 2024-03-18 13:11:15.32885812 +0000 UTC m=+83.787736789 (delta=81.670189ms)
	I0318 13:11:15.436693 1085975 fix.go:200] guest clock delta is within tolerance: 81.670189ms
	I0318 13:11:15.436699 1085975 start.go:83] releasing machines lock for "ha-942957-m02", held for 26.005152464s
	I0318 13:11:15.436732 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:15.437022 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:11:15.439753 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.440231 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.440262 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.443148 1085975 out.go:177] * Found network options:
	I0318 13:11:15.444848 1085975 out.go:177]   - NO_PROXY=192.168.39.68
	W0318 13:11:15.446278 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:11:15.446312 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:15.446913 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:15.447126 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:15.447226 1085975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:11:15.447271 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	W0318 13:11:15.447383 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:11:15.447492 1085975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:11:15.447518 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:15.450153 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.450259 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.450612 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.450656 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.450681 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.450702 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.450767 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:15.450930 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:15.451007 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:15.451222 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:15.451235 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:15.451373 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:15.451380 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	I0318 13:11:15.451511 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	I0318 13:11:15.687134 1085975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:11:15.694155 1085975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:11:15.694234 1085975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:11:15.711687 1085975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:11:15.711720 1085975 start.go:494] detecting cgroup driver to use...
	I0318 13:11:15.711808 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:11:15.734540 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:11:15.750975 1085975 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:11:15.751061 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:11:15.767571 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:11:15.784124 1085975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:11:15.911047 1085975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:11:16.068271 1085975 docker.go:233] disabling docker service ...
	I0318 13:11:16.068357 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:11:16.083266 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:11:16.096925 1085975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:11:16.222985 1085975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:11:16.346650 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:11:16.362877 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:11:16.383435 1085975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:11:16.383514 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:11:16.395001 1085975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:11:16.395092 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:11:16.406297 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:11:16.417592 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:11:16.428964 1085975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:11:16.442564 1085975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:11:16.453040 1085975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:11:16.453116 1085975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:11:16.467808 1085975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:11:16.478795 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:11:16.591469 1085975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:11:16.753636 1085975 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:11:16.753740 1085975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:11:16.760573 1085975 start.go:562] Will wait 60s for crictl version
	I0318 13:11:16.760654 1085975 ssh_runner.go:195] Run: which crictl
	I0318 13:11:16.764828 1085975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:11:16.806750 1085975 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:11:16.806834 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:11:16.839735 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:11:16.874776 1085975 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:11:16.876760 1085975 out.go:177]   - env NO_PROXY=192.168.39.68
	I0318 13:11:16.878161 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:11:16.880934 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:16.881244 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:16.881275 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:16.881461 1085975 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:11:16.885882 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:11:16.899647 1085975 mustload.go:65] Loading cluster: ha-942957
	I0318 13:11:16.899899 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:11:16.900251 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:11:16.900290 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:11:16.915276 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34171
	I0318 13:11:16.915848 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:11:16.916403 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:11:16.916431 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:11:16.916756 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:11:16.916967 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:11:16.918424 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:11:16.918730 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:11:16.918756 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:11:16.934538 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
	I0318 13:11:16.935009 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:11:16.935483 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:11:16.935504 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:11:16.935928 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:11:16.936174 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:11:16.936354 1085975 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957 for IP: 192.168.39.22
	I0318 13:11:16.936370 1085975 certs.go:194] generating shared ca certs ...
	I0318 13:11:16.936388 1085975 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:11:16.936572 1085975 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 13:11:16.936647 1085975 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 13:11:16.936664 1085975 certs.go:256] generating profile certs ...
	I0318 13:11:16.936761 1085975 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key
	I0318 13:11:16.936790 1085975 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.54e83969
	I0318 13:11:16.936813 1085975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.54e83969 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.22 192.168.39.254]
	I0318 13:11:17.106959 1085975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.54e83969 ...
	I0318 13:11:17.107000 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.54e83969: {Name:mk47891d09d3218143fd117c3b834e8a2af0c3c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:11:17.107204 1085975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.54e83969 ...
	I0318 13:11:17.107228 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.54e83969: {Name:mka2d870b8258374f0d23ed255f4b0a26e71e372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:11:17.107334 1085975 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.54e83969 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt
	I0318 13:11:17.107522 1085975 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.54e83969 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key
	I0318 13:11:17.107699 1085975 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key
	I0318 13:11:17.107720 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:11:17.107741 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:11:17.107761 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:11:17.107780 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:11:17.107796 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:11:17.107812 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:11:17.107855 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:11:17.107876 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:11:17.107947 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 13:11:17.107995 1085975 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 13:11:17.108009 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 13:11:17.108044 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:11:17.108075 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:11:17.108108 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 13:11:17.108167 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:11:17.108201 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /usr/share/ca-certificates/10752082.pem
	I0318 13:11:17.108221 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:11:17.108238 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem -> /usr/share/ca-certificates/1075208.pem
	I0318 13:11:17.108283 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:11:17.111760 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:11:17.112280 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:11:17.112308 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:11:17.112503 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:11:17.112707 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:11:17.112883 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:11:17.113061 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:11:17.188296 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 13:11:17.194736 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 13:11:17.211739 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 13:11:17.218802 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 13:11:17.231248 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 13:11:17.236559 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 13:11:17.249096 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 13:11:17.254541 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 13:11:17.273937 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 13:11:17.278978 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 13:11:17.291903 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 13:11:17.296810 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0318 13:11:17.309619 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:11:17.338014 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:11:17.364671 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:11:17.391267 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:11:17.418320 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 13:11:17.445903 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:11:17.472474 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:11:17.501830 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:11:17.528982 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 13:11:17.560200 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:11:17.587636 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 13:11:17.615376 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 13:11:17.636805 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 13:11:17.656270 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 13:11:17.674993 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 13:11:17.693944 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 13:11:17.712572 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0318 13:11:17.730647 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 13:11:17.749157 1085975 ssh_runner.go:195] Run: openssl version
	I0318 13:11:17.755339 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 13:11:17.766839 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 13:11:17.772000 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:11:17.772067 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 13:11:17.778092 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:11:17.789516 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:11:17.801214 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:11:17.806217 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:11:17.806292 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:11:17.812289 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:11:17.824223 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 13:11:17.835996 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 13:11:17.841063 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:11:17.841156 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 13:11:17.847062 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
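Each CA bundle copied to /usr/share/ca-certificates is also linked under /etc/ssl/certs by its OpenSSL subject hash, which is what the openssl x509 -hash / ln -fs pairs above are doing. A hedged sketch of that pattern for one bundle (the hash is always computed, never hard-coded):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941 in this run
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"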
	I0318 13:11:17.858651 1085975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:11:17.863636 1085975 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:11:17.863694 1085975 kubeadm.go:928] updating node {m02 192.168.39.22 8443 v1.28.4 crio true true} ...
	I0318 13:11:17.863781 1085975 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-942957-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:11:17.863807 1085975 kube-vip.go:111] generating kube-vip config ...
	I0318 13:11:17.863872 1085975 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 13:11:17.882420 1085975 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 13:11:17.882502 1085975 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
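The generated manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod and the control-plane VIP 192.168.39.254 follows the leader-election lease. A rough way to verify it once the node is up (the kubectl context name is assumed to match the profile name):

    # Static pods show up as kube-vip-<node-name> in kube-system
    kubectl --context ha-942957 -n kube-system get pods -o wide | grep kube-vip
    # The VIP should answer on the API server port from whichever node holds the lease
    curl -k https://192.168.39.254:8443/version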
	I0318 13:11:17.882568 1085975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:11:17.893248 1085975 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 13:11:17.893338 1085975 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 13:11:17.903883 1085975 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 13:11:17.903931 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 13:11:17.903981 1085975 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0318 13:11:17.904009 1085975 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0318 13:11:17.904062 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 13:11:17.908907 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 13:11:17.908946 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 13:11:18.719373 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 13:11:18.719461 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 13:11:18.724692 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 13:11:18.724730 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 13:11:19.428440 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:11:19.443936 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 13:11:19.444038 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 13:11:19.448789 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 13:11:19.448829 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
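Because /var/lib/minikube/binaries/v1.28.4 is empty on the fresh node, kubectl, kubeadm and kubelet are pulled from dl.k8s.io into the host-side cache (verified against the published .sha256 files referenced in the URLs above) and then scp'd into place. A hedged sketch of the same fetch-and-verify done by hand for one binary:

    ver=v1.28.4
    curl -LO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
    sudo install -m 0755 kubelet "/var/lib/minikube/binaries/${ver}/kubelet"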
	I0318 13:11:19.941599 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 13:11:19.951962 1085975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0318 13:11:19.970095 1085975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:11:19.989398 1085975 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 13:11:20.008620 1085975 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 13:11:20.013237 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:11:20.027096 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:11:20.167553 1085975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:11:20.185859 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:11:20.186296 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:11:20.186337 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:11:20.202883 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
	I0318 13:11:20.203406 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:11:20.203982 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:11:20.204011 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:11:20.204340 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:11:20.204519 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:11:20.204705 1085975 start.go:316] joinCluster: &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:11:20.204830 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 13:11:20.204850 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:11:20.208445 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:11:20.208966 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:11:20.208998 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:11:20.209148 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:11:20.209377 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:11:20.209525 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:11:20.209766 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:11:20.378572 1085975 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:11:20.378656 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j016fd.03qgv2nms34rlin2 --discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-942957-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443"
	I0318 13:12:01.574184 1085975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j016fd.03qgv2nms34rlin2 --discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-942957-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443": (41.195478751s)
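The join itself uses the token printed on the primary by kubeadm token create --print-join-command --ttl=0 (13:11:20), plus the flags minikube adds for the CRI-O socket, the control-plane role and the advertise address. Stripped of the minikube wrapper, the same join looks roughly like this (token and CA hash are placeholders, not the values from this run):

    # On an existing control-plane node
    kubeadm token create --print-join-command --ttl=0
    # On the joining node, using the printed token and hash
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane \
      --apiserver-advertise-address=192.168.39.22 \
      --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/crio/crio.sock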
	I0318 13:12:01.574238 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 13:12:02.046655 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-942957-m02 minikube.k8s.io/updated_at=2024_03_18T13_12_02_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=ha-942957 minikube.k8s.io/primary=false
	I0318 13:12:02.208529 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-942957-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 13:12:02.340804 1085975 start.go:318] duration metric: took 42.136091091s to joinCluster
	I0318 13:12:02.340915 1085975 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:12:02.342460 1085975 out.go:177] * Verifying Kubernetes components...
	I0318 13:12:02.341244 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:12:02.344035 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:02.534197 1085975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:12:02.563224 1085975 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:12:02.563519 1085975 kapi.go:59] client config for ha-942957: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt", KeyFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key", CAFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 13:12:02.563585 1085975 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.68:8443
	I0318 13:12:02.563853 1085975 node_ready.go:35] waiting up to 6m0s for node "ha-942957-m02" to be "Ready" ...
	I0318 13:12:02.563982 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:02.563992 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:02.564004 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:02.564012 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:02.574687 1085975 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 13:12:03.064611 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:03.064638 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:03.064648 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:03.064652 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:03.068752 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:03.564159 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:03.564184 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:03.564192 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:03.564195 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:03.568404 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:04.064588 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:04.064619 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:04.064631 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:04.064638 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:04.068822 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:04.565106 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:04.565138 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:04.565150 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:04.565156 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:04.570251 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:04.571105 1085975 node_ready.go:53] node "ha-942957-m02" has status "Ready":"False"
	I0318 13:12:05.065135 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:05.065159 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:05.065168 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:05.065172 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:05.069201 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:05.564819 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:05.564842 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:05.564851 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:05.564857 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:05.568616 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:06.064804 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:06.064830 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:06.064839 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:06.064845 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:06.068773 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:06.564987 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:06.565014 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:06.565024 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:06.565029 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:06.570196 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:07.064990 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:07.065047 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:07.065079 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:07.065086 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:07.069615 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:07.070403 1085975 node_ready.go:53] node "ha-942957-m02" has status "Ready":"False"
	I0318 13:12:07.564765 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:07.564792 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:07.564803 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:07.564808 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:07.569145 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:08.064191 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:08.064225 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.064237 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.064243 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.068300 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:08.564610 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:08.564637 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.564645 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.564649 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.569535 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:08.570588 1085975 node_ready.go:49] node "ha-942957-m02" has status "Ready":"True"
	I0318 13:12:08.570621 1085975 node_ready.go:38] duration metric: took 6.006728756s for node "ha-942957-m02" to be "Ready" ...
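The polling above is minikube's own client hitting the node object roughly every half second until "Ready" is "True"; expressed with kubectl the same wait is a single command (context name assumed from the profile):

    kubectl --context ha-942957 wait --for=condition=Ready node/ha-942957-m02 --timeout=6m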
	I0318 13:12:08.570633 1085975 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:12:08.570743 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:12:08.570757 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.570768 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.570772 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.577271 1085975 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:12:08.586296 1085975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.586396 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-f6dtz
	I0318 13:12:08.586404 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.586413 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.586422 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.590423 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:08.591241 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:08.591262 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.591272 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.591275 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.595365 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:08.595905 1085975 pod_ready.go:92] pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:08.595930 1085975 pod_ready.go:81] duration metric: took 9.60406ms for pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.595943 1085975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.596031 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pbr9j
	I0318 13:12:08.596042 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.596053 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.596061 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.600342 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:08.600929 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:08.600947 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.600954 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.600957 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.604171 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:08.604882 1085975 pod_ready.go:92] pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:08.604900 1085975 pod_ready.go:81] duration metric: took 8.948996ms for pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.604909 1085975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.604970 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957
	I0318 13:12:08.604980 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.604987 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.604990 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.608453 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:08.609532 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:08.609552 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.609562 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.609568 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.616023 1085975 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:12:08.616522 1085975 pod_ready.go:92] pod "etcd-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:08.616543 1085975 pod_ready.go:81] duration metric: took 11.628043ms for pod "etcd-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.616553 1085975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.616608 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:08.616616 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.616623 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.616628 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.619449 1085975 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:12:08.620219 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:08.620236 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.620245 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.620254 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.624122 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:09.117259 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:09.117286 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:09.117294 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:09.117299 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:09.121328 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:09.122550 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:09.122575 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:09.122587 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:09.122592 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:09.126093 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:09.617603 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:09.617628 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:09.617636 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:09.617639 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:09.622168 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:09.622818 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:09.622833 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:09.622842 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:09.622846 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:09.626659 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:10.116778 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:10.116806 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:10.116815 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:10.116819 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:10.121067 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:10.121781 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:10.121800 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:10.121809 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:10.121813 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:10.125700 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:10.617195 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:10.617222 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:10.617230 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:10.617234 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:10.621328 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:10.622361 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:10.622379 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:10.622387 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:10.622390 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:10.626338 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:10.627104 1085975 pod_ready.go:102] pod "etcd-ha-942957-m02" in "kube-system" namespace has status "Ready":"False"
	I0318 13:12:11.117155 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:11.117185 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:11.117198 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:11.117204 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:11.121032 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:11.121754 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:11.121781 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:11.121792 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:11.121796 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:11.125242 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:11.617831 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:11.617863 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:11.617872 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:11.617878 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:11.622109 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:11.622796 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:11.622814 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:11.622822 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:11.622826 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:11.626557 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.117269 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:12.117307 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.117318 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.117324 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.121557 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:12.122508 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:12.122528 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.122538 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.122543 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.126397 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.127280 1085975 pod_ready.go:92] pod "etcd-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:12.127310 1085975 pod_ready.go:81] duration metric: took 3.510749299s for pod "etcd-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.127332 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.127414 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957
	I0318 13:12:12.127426 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.127435 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.127439 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.131271 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.131953 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:12.131971 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.131978 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.131983 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.135022 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.135593 1085975 pod_ready.go:92] pod "kube-apiserver-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:12.135618 1085975 pod_ready.go:81] duration metric: took 8.278692ms for pod "kube-apiserver-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.135628 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.135693 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m02
	I0318 13:12:12.135701 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.135708 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.135712 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.138941 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.165000 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:12.165028 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.165039 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.165045 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.169099 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:12.169620 1085975 pod_ready.go:92] pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:12.169648 1085975 pod_ready.go:81] duration metric: took 34.012245ms for pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.169660 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.365192 1085975 request.go:629] Waited for 195.414508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957
	I0318 13:12:12.365279 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957
	I0318 13:12:12.365287 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.365297 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.365308 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.369036 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.565315 1085975 request.go:629] Waited for 195.410515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:12.565400 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:12.565406 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.565414 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.565419 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.569346 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.570230 1085975 pod_ready.go:92] pod "kube-controller-manager-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:12.570250 1085975 pod_ready.go:81] duration metric: took 400.582661ms for pod "kube-controller-manager-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.570262 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.765397 1085975 request.go:629] Waited for 195.030021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m02
	I0318 13:12:12.765517 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m02
	I0318 13:12:12.765531 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.765542 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.765553 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.769705 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:12.964743 1085975 request.go:629] Waited for 194.327407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:12.964831 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:12.964837 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.964845 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.964854 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.968992 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:12.970137 1085975 pod_ready.go:92] pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:12.970163 1085975 pod_ready.go:81] duration metric: took 399.894488ms for pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.970175 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97vsd" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:13.165361 1085975 request.go:629] Waited for 195.053042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97vsd
	I0318 13:12:13.165480 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97vsd
	I0318 13:12:13.165494 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:13.165506 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:13.165518 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:13.169678 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:13.364719 1085975 request.go:629] Waited for 194.292818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:13.364793 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:13.364799 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:13.364806 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:13.364811 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:13.368495 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:13.369594 1085975 pod_ready.go:92] pod "kube-proxy-97vsd" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:13.369622 1085975 pod_ready.go:81] duration metric: took 399.430259ms for pod "kube-proxy-97vsd" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:13.369636 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjmnr" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:13.565618 1085975 request.go:629] Waited for 195.883659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjmnr
	I0318 13:12:13.565721 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjmnr
	I0318 13:12:13.565733 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:13.565744 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:13.565751 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:13.569905 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:13.765023 1085975 request.go:629] Waited for 194.327941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:13.765105 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:13.765116 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:13.765127 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:13.765135 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:13.770162 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:13.770945 1085975 pod_ready.go:92] pod "kube-proxy-vjmnr" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:13.770968 1085975 pod_ready.go:81] duration metric: took 401.309863ms for pod "kube-proxy-vjmnr" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:13.770981 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:13.965017 1085975 request.go:629] Waited for 193.951484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957
	I0318 13:12:13.965120 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957
	I0318 13:12:13.965130 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:13.965139 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:13.965148 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:13.970848 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:14.164866 1085975 request.go:629] Waited for 192.304183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:14.164954 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:14.164962 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.164970 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.164981 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.170090 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:14.170594 1085975 pod_ready.go:92] pod "kube-scheduler-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:14.170618 1085975 pod_ready.go:81] duration metric: took 399.629246ms for pod "kube-scheduler-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:14.170627 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:14.364647 1085975 request.go:629] Waited for 193.89019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m02
	I0318 13:12:14.364750 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m02
	I0318 13:12:14.364757 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.364779 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.364787 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.368979 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:14.565100 1085975 request.go:629] Waited for 195.491375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:14.565185 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:14.565193 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.565230 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.565240 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.568977 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:14.569428 1085975 pod_ready.go:92] pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:14.569449 1085975 pod_ready.go:81] duration metric: took 398.814314ms for pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:14.569465 1085975 pod_ready.go:38] duration metric: took 5.998795055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:12:14.569487 1085975 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:12:14.569553 1085975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:12:14.585465 1085975 api_server.go:72] duration metric: took 12.244501387s to wait for apiserver process to appear ...
	I0318 13:12:14.585496 1085975 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:12:14.585519 1085975 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0318 13:12:14.592581 1085975 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0318 13:12:14.592670 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/version
	I0318 13:12:14.592679 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.592688 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.592691 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.594047 1085975 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 13:12:14.594177 1085975 api_server.go:141] control plane version: v1.28.4
	I0318 13:12:14.594197 1085975 api_server.go:131] duration metric: took 8.694439ms to wait for apiserver health ...
	I0318 13:12:14.594206 1085975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:12:14.765641 1085975 request.go:629] Waited for 171.352888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:12:14.765739 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:12:14.765745 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.765753 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.765758 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.771766 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:14.778312 1085975 system_pods.go:59] 17 kube-system pods found
	I0318 13:12:14.778347 1085975 system_pods.go:61] "coredns-5dd5756b68-f6dtz" [78994887-c343-49aa-bc5d-e099da752ad6] Running
	I0318 13:12:14.778352 1085975 system_pods.go:61] "coredns-5dd5756b68-pbr9j" [b011a4b6-807e-4af3-90f5-bc9af8ccd454] Running
	I0318 13:12:14.778356 1085975 system_pods.go:61] "etcd-ha-942957" [e3be3484-ebfd-4409-9209-4ef3b656e8d5] Running
	I0318 13:12:14.778359 1085975 system_pods.go:61] "etcd-ha-942957-m02" [2c328aba-cb1d-4ce7-82d2-ee469be1dea3] Running
	I0318 13:12:14.778362 1085975 system_pods.go:61] "kindnet-6rgvl" [eb410475-7c79-4ac1-b7df-a4781100d228] Running
	I0318 13:12:14.778365 1085975 system_pods.go:61] "kindnet-d4smn" [3c9d8fe8-55d9-4682-910f-d2e43efc0a2a] Running
	I0318 13:12:14.778368 1085975 system_pods.go:61] "kube-apiserver-ha-942957" [b0108c9e-26e4-46f5-a1c4-c069eba5b77f] Running
	I0318 13:12:14.778371 1085975 system_pods.go:61] "kube-apiserver-ha-942957-m02" [16270dbb-6afa-4f37-96dc-846a220bfc7b] Running
	I0318 13:12:14.778374 1085975 system_pods.go:61] "kube-controller-manager-ha-942957" [7543e199-eed7-4379-8f21-eb3171cfcfd4] Running
	I0318 13:12:14.778377 1085975 system_pods.go:61] "kube-controller-manager-ha-942957-m02" [dfdb2822-92f0-4146-8ef5-103524b684d4] Running
	I0318 13:12:14.778380 1085975 system_pods.go:61] "kube-proxy-97vsd" [a4d03704-5a4b-4973-b178-912218d00802] Running
	I0318 13:12:14.778383 1085975 system_pods.go:61] "kube-proxy-vjmnr" [e7dac65a-80b9-4e01-b4b0-10222991b604] Running
	I0318 13:12:14.778387 1085975 system_pods.go:61] "kube-scheduler-ha-942957" [125e01b5-776d-43ef-ac0e-3e21693cee59] Running
	I0318 13:12:14.778392 1085975 system_pods.go:61] "kube-scheduler-ha-942957-m02" [8ca9c332-c8ca-4991-955d-7fc4d0939fd0] Running
	I0318 13:12:14.778396 1085975 system_pods.go:61] "kube-vip-ha-942957" [731b23dc-6b59-4ffb-bf5b-c79279c55d75] Running
	I0318 13:12:14.778401 1085975 system_pods.go:61] "kube-vip-ha-942957-m02" [85b36617-81b8-446c-967c-f3c0c60d3926] Running
	I0318 13:12:14.778405 1085975 system_pods.go:61] "storage-provisioner" [b67e544b-41f2-4be4-90ed-971378c82a76] Running
	I0318 13:12:14.778412 1085975 system_pods.go:74] duration metric: took 184.198851ms to wait for pod list to return data ...
	I0318 13:12:14.778423 1085975 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:12:14.964815 1085975 request.go:629] Waited for 186.305806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0318 13:12:14.964937 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0318 13:12:14.964949 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.964960 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.964969 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.969113 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:14.969358 1085975 default_sa.go:45] found service account: "default"
	I0318 13:12:14.969375 1085975 default_sa.go:55] duration metric: took 190.945537ms for default service account to be created ...
	I0318 13:12:14.969385 1085975 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:12:15.164764 1085975 request.go:629] Waited for 195.302748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:12:15.164836 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:12:15.164846 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:15.164857 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:15.164865 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:15.176701 1085975 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 13:12:15.181088 1085975 system_pods.go:86] 17 kube-system pods found
	I0318 13:12:15.181122 1085975 system_pods.go:89] "coredns-5dd5756b68-f6dtz" [78994887-c343-49aa-bc5d-e099da752ad6] Running
	I0318 13:12:15.181128 1085975 system_pods.go:89] "coredns-5dd5756b68-pbr9j" [b011a4b6-807e-4af3-90f5-bc9af8ccd454] Running
	I0318 13:12:15.181132 1085975 system_pods.go:89] "etcd-ha-942957" [e3be3484-ebfd-4409-9209-4ef3b656e8d5] Running
	I0318 13:12:15.181137 1085975 system_pods.go:89] "etcd-ha-942957-m02" [2c328aba-cb1d-4ce7-82d2-ee469be1dea3] Running
	I0318 13:12:15.181141 1085975 system_pods.go:89] "kindnet-6rgvl" [eb410475-7c79-4ac1-b7df-a4781100d228] Running
	I0318 13:12:15.181144 1085975 system_pods.go:89] "kindnet-d4smn" [3c9d8fe8-55d9-4682-910f-d2e43efc0a2a] Running
	I0318 13:12:15.181148 1085975 system_pods.go:89] "kube-apiserver-ha-942957" [b0108c9e-26e4-46f5-a1c4-c069eba5b77f] Running
	I0318 13:12:15.181152 1085975 system_pods.go:89] "kube-apiserver-ha-942957-m02" [16270dbb-6afa-4f37-96dc-846a220bfc7b] Running
	I0318 13:12:15.181156 1085975 system_pods.go:89] "kube-controller-manager-ha-942957" [7543e199-eed7-4379-8f21-eb3171cfcfd4] Running
	I0318 13:12:15.181160 1085975 system_pods.go:89] "kube-controller-manager-ha-942957-m02" [dfdb2822-92f0-4146-8ef5-103524b684d4] Running
	I0318 13:12:15.181164 1085975 system_pods.go:89] "kube-proxy-97vsd" [a4d03704-5a4b-4973-b178-912218d00802] Running
	I0318 13:12:15.181168 1085975 system_pods.go:89] "kube-proxy-vjmnr" [e7dac65a-80b9-4e01-b4b0-10222991b604] Running
	I0318 13:12:15.181173 1085975 system_pods.go:89] "kube-scheduler-ha-942957" [125e01b5-776d-43ef-ac0e-3e21693cee59] Running
	I0318 13:12:15.181179 1085975 system_pods.go:89] "kube-scheduler-ha-942957-m02" [8ca9c332-c8ca-4991-955d-7fc4d0939fd0] Running
	I0318 13:12:15.181185 1085975 system_pods.go:89] "kube-vip-ha-942957" [731b23dc-6b59-4ffb-bf5b-c79279c55d75] Running
	I0318 13:12:15.181190 1085975 system_pods.go:89] "kube-vip-ha-942957-m02" [85b36617-81b8-446c-967c-f3c0c60d3926] Running
	I0318 13:12:15.181203 1085975 system_pods.go:89] "storage-provisioner" [b67e544b-41f2-4be4-90ed-971378c82a76] Running
	I0318 13:12:15.181218 1085975 system_pods.go:126] duration metric: took 211.825119ms to wait for k8s-apps to be running ...
	I0318 13:12:15.181227 1085975 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:12:15.181292 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:12:15.200909 1085975 system_svc.go:56] duration metric: took 19.671034ms WaitForService to wait for kubelet
	I0318 13:12:15.200945 1085975 kubeadm.go:576] duration metric: took 12.859991161s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:12:15.200967 1085975 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:12:15.364750 1085975 request.go:629] Waited for 163.690957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes
	I0318 13:12:15.364843 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes
	I0318 13:12:15.364853 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:15.364860 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:15.364865 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:15.369136 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:15.370044 1085975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:12:15.370072 1085975 node_conditions.go:123] node cpu capacity is 2
	I0318 13:12:15.370123 1085975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:12:15.370128 1085975 node_conditions.go:123] node cpu capacity is 2
	I0318 13:12:15.370133 1085975 node_conditions.go:105] duration metric: took 169.161669ms to run NodePressure ...
	I0318 13:12:15.370148 1085975 start.go:240] waiting for startup goroutines ...
	I0318 13:12:15.370186 1085975 start.go:254] writing updated cluster config ...
	I0318 13:12:15.372826 1085975 out.go:177] 
	I0318 13:12:15.374338 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:12:15.374436 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:12:15.376117 1085975 out.go:177] * Starting "ha-942957-m03" control-plane node in "ha-942957" cluster
	I0318 13:12:15.377275 1085975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:12:15.377299 1085975 cache.go:56] Caching tarball of preloaded images
	I0318 13:12:15.377441 1085975 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:12:15.377458 1085975 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:12:15.377602 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:12:15.377814 1085975 start.go:360] acquireMachinesLock for ha-942957-m03: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:12:15.377864 1085975 start.go:364] duration metric: took 27.524µs to acquireMachinesLock for "ha-942957-m03"
	I0318 13:12:15.377885 1085975 start.go:93] Provisioning new machine with config: &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:12:15.378046 1085975 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0318 13:12:15.379855 1085975 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:12:15.379949 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:12:15.379990 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:12:15.395657 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46219
	I0318 13:12:15.396172 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:12:15.396719 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:12:15.396767 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:12:15.397203 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:12:15.397479 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetMachineName
	I0318 13:12:15.397631 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:15.397839 1085975 start.go:159] libmachine.API.Create for "ha-942957" (driver="kvm2")
	I0318 13:12:15.397881 1085975 client.go:168] LocalClient.Create starting
	I0318 13:12:15.397922 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 13:12:15.397974 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:12:15.397995 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:12:15.398101 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 13:12:15.398132 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:12:15.398149 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:12:15.398176 1085975 main.go:141] libmachine: Running pre-create checks...
	I0318 13:12:15.398188 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .PreCreateCheck
	I0318 13:12:15.398386 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetConfigRaw
	I0318 13:12:15.398904 1085975 main.go:141] libmachine: Creating machine...
	I0318 13:12:15.398923 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .Create
	I0318 13:12:15.399093 1085975 main.go:141] libmachine: (ha-942957-m03) Creating KVM machine...
	I0318 13:12:15.400488 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found existing default KVM network
	I0318 13:12:15.400628 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found existing private KVM network mk-ha-942957
	I0318 13:12:15.400841 1085975 main.go:141] libmachine: (ha-942957-m03) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03 ...
	I0318 13:12:15.400865 1085975 main.go:141] libmachine: (ha-942957-m03) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 13:12:15.403983 1085975 main.go:141] libmachine: (ha-942957-m03) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 13:12:15.404022 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:15.400817 1086668 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:12:15.659790 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:15.659650 1086668 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa...
	I0318 13:12:15.863819 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:15.863658 1086668 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/ha-942957-m03.rawdisk...
	I0318 13:12:15.863879 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Writing magic tar header
	I0318 13:12:15.863891 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Writing SSH key tar header
	I0318 13:12:15.863900 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:15.863777 1086668 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03 ...
	I0318 13:12:15.863932 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03
	I0318 13:12:15.863954 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03 (perms=drwx------)
	I0318 13:12:15.863964 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 13:12:15.864046 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:12:15.864074 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 13:12:15.864086 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 13:12:15.864107 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 13:12:15.864120 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins
	I0318 13:12:15.864137 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home
	I0318 13:12:15.864149 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Skipping /home - not owner
	I0318 13:12:15.864188 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 13:12:15.864217 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 13:12:15.864235 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 13:12:15.864248 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 13:12:15.864263 1085975 main.go:141] libmachine: (ha-942957-m03) Creating domain...
	I0318 13:12:15.865164 1085975 main.go:141] libmachine: (ha-942957-m03) define libvirt domain using xml: 
	I0318 13:12:15.865184 1085975 main.go:141] libmachine: (ha-942957-m03) <domain type='kvm'>
	I0318 13:12:15.865194 1085975 main.go:141] libmachine: (ha-942957-m03)   <name>ha-942957-m03</name>
	I0318 13:12:15.865206 1085975 main.go:141] libmachine: (ha-942957-m03)   <memory unit='MiB'>2200</memory>
	I0318 13:12:15.865215 1085975 main.go:141] libmachine: (ha-942957-m03)   <vcpu>2</vcpu>
	I0318 13:12:15.865224 1085975 main.go:141] libmachine: (ha-942957-m03)   <features>
	I0318 13:12:15.865233 1085975 main.go:141] libmachine: (ha-942957-m03)     <acpi/>
	I0318 13:12:15.865243 1085975 main.go:141] libmachine: (ha-942957-m03)     <apic/>
	I0318 13:12:15.865251 1085975 main.go:141] libmachine: (ha-942957-m03)     <pae/>
	I0318 13:12:15.865260 1085975 main.go:141] libmachine: (ha-942957-m03)     
	I0318 13:12:15.865267 1085975 main.go:141] libmachine: (ha-942957-m03)   </features>
	I0318 13:12:15.865278 1085975 main.go:141] libmachine: (ha-942957-m03)   <cpu mode='host-passthrough'>
	I0318 13:12:15.865288 1085975 main.go:141] libmachine: (ha-942957-m03)   
	I0318 13:12:15.865294 1085975 main.go:141] libmachine: (ha-942957-m03)   </cpu>
	I0318 13:12:15.865303 1085975 main.go:141] libmachine: (ha-942957-m03)   <os>
	I0318 13:12:15.865310 1085975 main.go:141] libmachine: (ha-942957-m03)     <type>hvm</type>
	I0318 13:12:15.865322 1085975 main.go:141] libmachine: (ha-942957-m03)     <boot dev='cdrom'/>
	I0318 13:12:15.865330 1085975 main.go:141] libmachine: (ha-942957-m03)     <boot dev='hd'/>
	I0318 13:12:15.865339 1085975 main.go:141] libmachine: (ha-942957-m03)     <bootmenu enable='no'/>
	I0318 13:12:15.865349 1085975 main.go:141] libmachine: (ha-942957-m03)   </os>
	I0318 13:12:15.865358 1085975 main.go:141] libmachine: (ha-942957-m03)   <devices>
	I0318 13:12:15.865369 1085975 main.go:141] libmachine: (ha-942957-m03)     <disk type='file' device='cdrom'>
	I0318 13:12:15.865386 1085975 main.go:141] libmachine: (ha-942957-m03)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/boot2docker.iso'/>
	I0318 13:12:15.865397 1085975 main.go:141] libmachine: (ha-942957-m03)       <target dev='hdc' bus='scsi'/>
	I0318 13:12:15.865405 1085975 main.go:141] libmachine: (ha-942957-m03)       <readonly/>
	I0318 13:12:15.865414 1085975 main.go:141] libmachine: (ha-942957-m03)     </disk>
	I0318 13:12:15.865424 1085975 main.go:141] libmachine: (ha-942957-m03)     <disk type='file' device='disk'>
	I0318 13:12:15.865436 1085975 main.go:141] libmachine: (ha-942957-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 13:12:15.865449 1085975 main.go:141] libmachine: (ha-942957-m03)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/ha-942957-m03.rawdisk'/>
	I0318 13:12:15.865460 1085975 main.go:141] libmachine: (ha-942957-m03)       <target dev='hda' bus='virtio'/>
	I0318 13:12:15.865486 1085975 main.go:141] libmachine: (ha-942957-m03)     </disk>
	I0318 13:12:15.865498 1085975 main.go:141] libmachine: (ha-942957-m03)     <interface type='network'>
	I0318 13:12:15.865508 1085975 main.go:141] libmachine: (ha-942957-m03)       <source network='mk-ha-942957'/>
	I0318 13:12:15.865515 1085975 main.go:141] libmachine: (ha-942957-m03)       <model type='virtio'/>
	I0318 13:12:15.865527 1085975 main.go:141] libmachine: (ha-942957-m03)     </interface>
	I0318 13:12:15.865539 1085975 main.go:141] libmachine: (ha-942957-m03)     <interface type='network'>
	I0318 13:12:15.865551 1085975 main.go:141] libmachine: (ha-942957-m03)       <source network='default'/>
	I0318 13:12:15.865561 1085975 main.go:141] libmachine: (ha-942957-m03)       <model type='virtio'/>
	I0318 13:12:15.865574 1085975 main.go:141] libmachine: (ha-942957-m03)     </interface>
	I0318 13:12:15.865584 1085975 main.go:141] libmachine: (ha-942957-m03)     <serial type='pty'>
	I0318 13:12:15.865596 1085975 main.go:141] libmachine: (ha-942957-m03)       <target port='0'/>
	I0318 13:12:15.865606 1085975 main.go:141] libmachine: (ha-942957-m03)     </serial>
	I0318 13:12:15.865615 1085975 main.go:141] libmachine: (ha-942957-m03)     <console type='pty'>
	I0318 13:12:15.865626 1085975 main.go:141] libmachine: (ha-942957-m03)       <target type='serial' port='0'/>
	I0318 13:12:15.865638 1085975 main.go:141] libmachine: (ha-942957-m03)     </console>
	I0318 13:12:15.865648 1085975 main.go:141] libmachine: (ha-942957-m03)     <rng model='virtio'>
	I0318 13:12:15.865660 1085975 main.go:141] libmachine: (ha-942957-m03)       <backend model='random'>/dev/random</backend>
	I0318 13:12:15.865666 1085975 main.go:141] libmachine: (ha-942957-m03)     </rng>
	I0318 13:12:15.865677 1085975 main.go:141] libmachine: (ha-942957-m03)     
	I0318 13:12:15.865686 1085975 main.go:141] libmachine: (ha-942957-m03)     
	I0318 13:12:15.865694 1085975 main.go:141] libmachine: (ha-942957-m03)   </devices>
	I0318 13:12:15.865703 1085975 main.go:141] libmachine: (ha-942957-m03) </domain>
	I0318 13:12:15.865714 1085975 main.go:141] libmachine: (ha-942957-m03) 
	I0318 13:12:15.873143 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:c3:8a:cc in network default
	I0318 13:12:15.873857 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:15.873896 1085975 main.go:141] libmachine: (ha-942957-m03) Ensuring networks are active...
	I0318 13:12:15.874702 1085975 main.go:141] libmachine: (ha-942957-m03) Ensuring network default is active
	I0318 13:12:15.875110 1085975 main.go:141] libmachine: (ha-942957-m03) Ensuring network mk-ha-942957 is active
	I0318 13:12:15.875537 1085975 main.go:141] libmachine: (ha-942957-m03) Getting domain xml...
	I0318 13:12:15.876385 1085975 main.go:141] libmachine: (ha-942957-m03) Creating domain...
	I0318 13:12:17.113074 1085975 main.go:141] libmachine: (ha-942957-m03) Waiting to get IP...
	I0318 13:12:17.113884 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:17.114363 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:17.114412 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:17.114345 1086668 retry.go:31] will retry after 201.949613ms: waiting for machine to come up
	I0318 13:12:17.317842 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:17.318361 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:17.318386 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:17.318315 1086668 retry.go:31] will retry after 361.088581ms: waiting for machine to come up
	I0318 13:12:17.681105 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:17.681546 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:17.681582 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:17.681503 1086668 retry.go:31] will retry after 417.612899ms: waiting for machine to come up
	I0318 13:12:18.101244 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:18.101743 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:18.101768 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:18.101706 1086668 retry.go:31] will retry after 398.155429ms: waiting for machine to come up
	I0318 13:12:18.502103 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:18.502489 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:18.502519 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:18.502464 1086668 retry.go:31] will retry after 604.308205ms: waiting for machine to come up
	I0318 13:12:19.108316 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:19.108744 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:19.108775 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:19.108697 1086668 retry.go:31] will retry after 891.677543ms: waiting for machine to come up
	I0318 13:12:20.002548 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:20.003175 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:20.003210 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:20.003106 1086668 retry.go:31] will retry after 1.001185435s: waiting for machine to come up
	I0318 13:12:21.006470 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:21.006985 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:21.007014 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:21.006947 1086668 retry.go:31] will retry after 987.859668ms: waiting for machine to come up
	I0318 13:12:21.996407 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:21.996997 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:21.997020 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:21.996948 1086668 retry.go:31] will retry after 1.431664028s: waiting for machine to come up
	I0318 13:12:23.430602 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:23.431081 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:23.431108 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:23.431025 1086668 retry.go:31] will retry after 1.676487591s: waiting for machine to come up
	I0318 13:12:25.109912 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:25.110380 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:25.110411 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:25.110339 1086668 retry.go:31] will retry after 2.714530325s: waiting for machine to come up
	I0318 13:12:27.827207 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:27.827685 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:27.827714 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:27.827635 1086668 retry.go:31] will retry after 2.457496431s: waiting for machine to come up
	I0318 13:12:30.287007 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:30.287471 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:30.287544 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:30.287466 1086668 retry.go:31] will retry after 2.869948309s: waiting for machine to come up
	I0318 13:12:33.160830 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:33.161298 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:33.161323 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:33.161247 1086668 retry.go:31] will retry after 3.782381909s: waiting for machine to come up
	I0318 13:12:36.944857 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:36.945373 1085975 main.go:141] libmachine: (ha-942957-m03) Found IP for machine: 192.168.39.135
	I0318 13:12:36.945394 1085975 main.go:141] libmachine: (ha-942957-m03) Reserving static IP address...
	I0318 13:12:36.945404 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has current primary IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:36.945940 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find host DHCP lease matching {name: "ha-942957-m03", mac: "52:54:00:60:e8:43", ip: "192.168.39.135"} in network mk-ha-942957
	I0318 13:12:37.029672 1085975 main.go:141] libmachine: (ha-942957-m03) Reserved static IP address: 192.168.39.135
	I0318 13:12:37.029712 1085975 main.go:141] libmachine: (ha-942957-m03) Waiting for SSH to be available...
	I0318 13:12:37.029723 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Getting to WaitForSSH function...
	I0318 13:12:37.032526 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.032970 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.033008 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.033160 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Using SSH client type: external
	I0318 13:12:37.033193 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa (-rw-------)
	I0318 13:12:37.033223 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:12:37.033238 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | About to run SSH command:
	I0318 13:12:37.033251 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | exit 0
	I0318 13:12:37.156390 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | SSH cmd err, output: <nil>: 
	I0318 13:12:37.156715 1085975 main.go:141] libmachine: (ha-942957-m03) KVM machine creation complete!
	I0318 13:12:37.157048 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetConfigRaw
	I0318 13:12:37.157637 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:37.157871 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:37.158090 1085975 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 13:12:37.158108 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:12:37.159348 1085975 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 13:12:37.159367 1085975 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 13:12:37.159376 1085975 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 13:12:37.159385 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:37.162153 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.162571 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.162598 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.162723 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:37.162909 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.163056 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.163185 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:37.163362 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:37.163643 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:37.163659 1085975 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 13:12:37.271746 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:12:37.271779 1085975 main.go:141] libmachine: Detecting the provisioner...
	I0318 13:12:37.271792 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:37.274733 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.275180 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.275211 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.275380 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:37.275607 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.275820 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.276004 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:37.276224 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:37.276414 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:37.276428 1085975 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 13:12:37.381137 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 13:12:37.381244 1085975 main.go:141] libmachine: found compatible host: buildroot
	I0318 13:12:37.381254 1085975 main.go:141] libmachine: Provisioning with buildroot...
	I0318 13:12:37.381261 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetMachineName
	I0318 13:12:37.381553 1085975 buildroot.go:166] provisioning hostname "ha-942957-m03"
	I0318 13:12:37.381591 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetMachineName
	I0318 13:12:37.381840 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:37.384755 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.385171 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.385203 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.385390 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:37.385598 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.385784 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.385958 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:37.386147 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:37.386343 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:37.386359 1085975 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-942957-m03 && echo "ha-942957-m03" | sudo tee /etc/hostname
	I0318 13:12:37.510570 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-942957-m03
	
	I0318 13:12:37.510616 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:37.513983 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.514356 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.514397 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.514658 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:37.514877 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.515089 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.515277 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:37.515444 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:37.515613 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:37.515630 1085975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-942957-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-942957-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-942957-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:12:37.635916 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:12:37.635951 1085975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 13:12:37.635977 1085975 buildroot.go:174] setting up certificates
	I0318 13:12:37.635993 1085975 provision.go:84] configureAuth start
	I0318 13:12:37.636010 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetMachineName
	I0318 13:12:37.636367 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:12:37.639710 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.640162 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.640196 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.640427 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:37.643111 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.643538 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.643567 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.643861 1085975 provision.go:143] copyHostCerts
	I0318 13:12:37.643902 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:12:37.643941 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 13:12:37.643955 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:12:37.644042 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 13:12:37.644145 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:12:37.644171 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 13:12:37.644178 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:12:37.644217 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 13:12:37.644278 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:12:37.644306 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 13:12:37.644315 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:12:37.644348 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 13:12:37.644416 1085975 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.ha-942957-m03 san=[127.0.0.1 192.168.39.135 ha-942957-m03 localhost minikube]
	I0318 13:12:38.043304 1085975 provision.go:177] copyRemoteCerts
	I0318 13:12:38.043383 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:12:38.043421 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:38.046406 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.046708 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.046738 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.046959 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.047213 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.047388 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.047567 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:12:38.130688 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:12:38.130800 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:12:38.160923 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:12:38.161016 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 13:12:38.191115 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:12:38.191210 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:12:38.219429 1085975 provision.go:87] duration metric: took 583.414938ms to configureAuth
	I0318 13:12:38.219470 1085975 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:12:38.219740 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:12:38.219912 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:38.222976 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.223443 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.223469 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.223721 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.223980 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.224165 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.224311 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.224514 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:38.224693 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:38.224707 1085975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:12:38.522199 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:12:38.522243 1085975 main.go:141] libmachine: Checking connection to Docker...
	I0318 13:12:38.522256 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetURL
	I0318 13:12:38.524076 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Using libvirt version 6000000
	I0318 13:12:38.526778 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.527217 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.527253 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.527469 1085975 main.go:141] libmachine: Docker is up and running!
	I0318 13:12:38.527492 1085975 main.go:141] libmachine: Reticulating splines...
	I0318 13:12:38.527501 1085975 client.go:171] duration metric: took 23.129609775s to LocalClient.Create
	I0318 13:12:38.527527 1085975 start.go:167] duration metric: took 23.129689972s to libmachine.API.Create "ha-942957"
	I0318 13:12:38.527545 1085975 start.go:293] postStartSetup for "ha-942957-m03" (driver="kvm2")
	I0318 13:12:38.527562 1085975 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:12:38.527587 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:38.527885 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:12:38.527922 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:38.530278 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.530649 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.530675 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.530858 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.531033 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.531251 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.531409 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:12:38.616038 1085975 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:12:38.620973 1085975 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:12:38.621012 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 13:12:38.621096 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 13:12:38.621185 1085975 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 13:12:38.621197 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /etc/ssl/certs/10752082.pem
	I0318 13:12:38.621290 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:12:38.632111 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:12:38.659511 1085975 start.go:296] duration metric: took 131.944258ms for postStartSetup
	I0318 13:12:38.659586 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetConfigRaw
	I0318 13:12:38.660327 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:12:38.663448 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.663820 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.663892 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.664232 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:12:38.664468 1085975 start.go:128] duration metric: took 23.286407971s to createHost
	I0318 13:12:38.664498 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:38.667126 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.667481 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.667504 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.667636 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.667871 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.668050 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.668211 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.668384 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:38.668578 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:38.668591 1085975 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:12:38.773377 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767558.753487949
	
	I0318 13:12:38.773407 1085975 fix.go:216] guest clock: 1710767558.753487949
	I0318 13:12:38.773423 1085975 fix.go:229] Guest: 2024-03-18 13:12:38.753487949 +0000 UTC Remote: 2024-03-18 13:12:38.664483321 +0000 UTC m=+167.123361983 (delta=89.004628ms)
	I0318 13:12:38.773447 1085975 fix.go:200] guest clock delta is within tolerance: 89.004628ms
	I0318 13:12:38.773454 1085975 start.go:83] releasing machines lock for "ha-942957-m03", held for 23.395577494s
	I0318 13:12:38.773480 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:38.773770 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:12:38.776659 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.777091 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.777124 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.779480 1085975 out.go:177] * Found network options:
	I0318 13:12:38.781030 1085975 out.go:177]   - NO_PROXY=192.168.39.68,192.168.39.22
	W0318 13:12:38.782426 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 13:12:38.782453 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:12:38.782479 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:38.783158 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:38.783397 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:38.783534 1085975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:12:38.783579 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	W0318 13:12:38.783623 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 13:12:38.783652 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:12:38.783732 1085975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:12:38.783759 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:38.786716 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.786841 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.787131 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.787158 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.787187 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.787233 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.787295 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.787518 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.787521 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.787708 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.787715 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.787861 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.787856 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:12:38.788045 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:12:39.025721 1085975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:12:39.033191 1085975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:12:39.033276 1085975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:12:39.052390 1085975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:12:39.052432 1085975 start.go:494] detecting cgroup driver to use...
	I0318 13:12:39.052548 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:12:39.069919 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:12:39.084577 1085975 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:12:39.084659 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:12:39.099238 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:12:39.113766 1085975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:12:39.243070 1085975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:12:39.408921 1085975 docker.go:233] disabling docker service ...
	I0318 13:12:39.409020 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:12:39.425742 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:12:39.440652 1085975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:12:39.579646 1085975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:12:39.707442 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:12:39.722635 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:12:39.742783 1085975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:12:39.742855 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:12:39.753860 1085975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:12:39.753947 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:12:39.764521 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:12:39.775149 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:12:39.786262 1085975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:12:39.798772 1085975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:12:39.810435 1085975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:12:39.810507 1085975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:12:39.824792 1085975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:12:39.836435 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:39.963591 1085975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:12:40.111783 1085975 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:12:40.111881 1085975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:12:40.117244 1085975 start.go:562] Will wait 60s for crictl version
	I0318 13:12:40.117314 1085975 ssh_runner.go:195] Run: which crictl
	I0318 13:12:40.122337 1085975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:12:40.168164 1085975 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:12:40.168269 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:12:40.198928 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:12:40.232691 1085975 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:12:40.234577 1085975 out.go:177]   - env NO_PROXY=192.168.39.68
	I0318 13:12:40.236099 1085975 out.go:177]   - env NO_PROXY=192.168.39.68,192.168.39.22
	I0318 13:12:40.237376 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:12:40.240527 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:40.240941 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:40.240971 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:40.241180 1085975 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:12:40.246582 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:12:40.260287 1085975 mustload.go:65] Loading cluster: ha-942957
	I0318 13:12:40.260681 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:12:40.261094 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:12:40.261153 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:12:40.277488 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0318 13:12:40.277970 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:12:40.278474 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:12:40.278498 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:12:40.278874 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:12:40.279115 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:12:40.280672 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:12:40.280962 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:12:40.280996 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:12:40.295913 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I0318 13:12:40.296377 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:12:40.296927 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:12:40.296959 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:12:40.297315 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:12:40.297534 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:12:40.297752 1085975 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957 for IP: 192.168.39.135
	I0318 13:12:40.297765 1085975 certs.go:194] generating shared ca certs ...
	I0318 13:12:40.297781 1085975 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:12:40.297917 1085975 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 13:12:40.297952 1085975 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 13:12:40.297961 1085975 certs.go:256] generating profile certs ...
	I0318 13:12:40.298048 1085975 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key
	I0318 13:12:40.298073 1085975 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.8c06c577
	I0318 13:12:40.298089 1085975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.8c06c577 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.22 192.168.39.135 192.168.39.254]
	I0318 13:12:40.422797 1085975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.8c06c577 ...
	I0318 13:12:40.422839 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.8c06c577: {Name:mk8f2c47f91c4ca227df518f1be79da263f9ffc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:12:40.423049 1085975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.8c06c577 ...
	I0318 13:12:40.423065 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.8c06c577: {Name:mkfb54bc97c141343d32974fffccba1d6d1decf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:12:40.423167 1085975 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.8c06c577 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt
	I0318 13:12:40.423300 1085975 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.8c06c577 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key
	I0318 13:12:40.423429 1085975 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key
	I0318 13:12:40.423448 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:12:40.423461 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:12:40.423474 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:12:40.423486 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:12:40.423499 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:12:40.423510 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:12:40.423521 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:12:40.423534 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:12:40.423585 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 13:12:40.423616 1085975 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 13:12:40.423626 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 13:12:40.423646 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:12:40.423674 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:12:40.423705 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 13:12:40.423766 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:12:40.423808 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /usr/share/ca-certificates/10752082.pem
	I0318 13:12:40.423853 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:40.423873 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem -> /usr/share/ca-certificates/1075208.pem
	I0318 13:12:40.423920 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:12:40.427041 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:12:40.427460 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:12:40.427491 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:12:40.427711 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:12:40.427993 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:12:40.428168 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:12:40.428296 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:12:40.508173 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 13:12:40.514284 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 13:12:40.526841 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 13:12:40.531667 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 13:12:40.543814 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 13:12:40.548746 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 13:12:40.563064 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 13:12:40.567906 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 13:12:40.584407 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 13:12:40.589804 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 13:12:40.603753 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 13:12:40.608351 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0318 13:12:40.621966 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:12:40.653415 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:12:40.682724 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:12:40.712195 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:12:40.740664 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0318 13:12:40.768850 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:12:40.797819 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:12:40.825606 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:12:40.856382 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 13:12:40.885403 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:12:40.914530 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 13:12:40.945351 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 13:12:40.965155 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 13:12:40.984722 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 13:12:41.004718 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 13:12:41.025176 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 13:12:41.043504 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0318 13:12:41.062091 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 13:12:41.082769 1085975 ssh_runner.go:195] Run: openssl version
	I0318 13:12:41.089387 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 13:12:41.101714 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 13:12:41.106806 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:12:41.106888 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 13:12:41.113364 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 13:12:41.125670 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 13:12:41.137584 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 13:12:41.142707 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:12:41.142783 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 13:12:41.149154 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:12:41.160712 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:12:41.173476 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:41.179302 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:41.179395 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:41.186253 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:12:41.198808 1085975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:12:41.203660 1085975 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:12:41.203745 1085975 kubeadm.go:928] updating node {m03 192.168.39.135 8443 v1.28.4 crio true true} ...
	I0318 13:12:41.203911 1085975 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-942957-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:12:41.203947 1085975 kube-vip.go:111] generating kube-vip config ...
	I0318 13:12:41.204009 1085975 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 13:12:41.224104 1085975 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 13:12:41.224196 1085975 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 13:12:41.224264 1085975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:12:41.236822 1085975 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 13:12:41.236908 1085975 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 13:12:41.248647 1085975 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 13:12:41.248686 1085975 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 13:12:41.248705 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:12:41.248715 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 13:12:41.248647 1085975 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 13:12:41.248799 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 13:12:41.248807 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 13:12:41.248911 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 13:12:41.264835 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 13:12:41.264861 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 13:12:41.264898 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 13:12:41.264925 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 13:12:41.264946 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 13:12:41.264959 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 13:12:41.274291 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 13:12:41.274329 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0318 13:12:42.353556 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 13:12:42.364551 1085975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 13:12:42.385348 1085975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:12:42.405052 1085975 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 13:12:42.425540 1085975 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 13:12:42.429995 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:12:42.443611 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:42.582612 1085975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:12:42.599893 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:12:42.600249 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:12:42.600305 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:12:42.618027 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I0318 13:12:42.618630 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:12:42.619404 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:12:42.619457 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:12:42.619896 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:12:42.620130 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:12:42.620347 1085975 start.go:316] joinCluster: &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:12:42.620583 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 13:12:42.620609 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:12:42.624284 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:12:42.625006 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:12:42.625043 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:12:42.625200 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:12:42.625418 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:12:42.625651 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:12:42.625859 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:12:42.807169 1085975 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:12:42.807241 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3x4ayw.ucvvy5mdkat71a27 --discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-942957-m03 --control-plane --apiserver-advertise-address=192.168.39.135 --apiserver-bind-port=8443"
	I0318 13:13:10.159544 1085975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3x4ayw.ucvvy5mdkat71a27 --discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-942957-m03 --control-plane --apiserver-advertise-address=192.168.39.135 --apiserver-bind-port=8443": (27.352266765s)
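The join above is driven by two commands that appear verbatim in the log: a token is minted on the existing control plane, then kubeadm join is run on the new node. A minimal shell sketch of that flow, with placeholders for the run-specific token and CA hash:

  # on the existing control-plane node: mint a non-expiring join token (as start.go does above)
  sudo kubeadm token create --print-join-command --ttl=0

  # on the new node (m03): join as an additional control-plane member over the crio socket
  sudo kubeadm join control-plane.minikube.internal:8443 \
    --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --apiserver-advertise-address=192.168.39.135 --apiserver-bind-port=8443 \
    --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-942957-m03 \
    --ignore-preflight-errors=all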
	I0318 13:13:10.159592 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 13:13:10.604945 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-942957-m03 minikube.k8s.io/updated_at=2024_03_18T13_13_10_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=ha-942957 minikube.k8s.io/primary=false
	I0318 13:13:10.741208 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-942957-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 13:13:10.900260 1085975 start.go:318] duration metric: took 28.279905586s to joinCluster
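After the join, the new member is labeled and its control-plane taint is removed (the two kubectl invocations at 13:13:10 above). Run against the cluster directly rather than over SSH, the equivalent is roughly:

  # label the joined node with minikube metadata (values taken from this run)
  kubectl label --overwrite nodes ha-942957-m03 minikube.k8s.io/name=ha-942957 minikube.k8s.io/primary=false

  # drop the NoSchedule taint so ordinary workloads can be scheduled on the new control-plane node
  kubectl taint nodes ha-942957-m03 node-role.kubernetes.io/control-plane:NoSchedule-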
	I0318 13:13:10.900369 1085975 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:13:10.902102 1085975 out.go:177] * Verifying Kubernetes components...
	I0318 13:13:10.900822 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:13:10.903490 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:13:11.138942 1085975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:13:11.155205 1085975 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:13:11.155607 1085975 kapi.go:59] client config for ha-942957: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt", KeyFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key", CAFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 13:13:11.155726 1085975 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.68:8443
	I0318 13:13:11.156059 1085975 node_ready.go:35] waiting up to 6m0s for node "ha-942957-m03" to be "Ready" ...
	I0318 13:13:11.156196 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:11.156210 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:11.156221 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:11.156230 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:11.161930 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:11.657266 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:11.657297 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:11.657310 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:11.657315 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:11.661128 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:12.156796 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:12.156830 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:12.156844 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:12.156851 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:12.161145 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:12.656610 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:12.656640 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:12.656649 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:12.656654 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:12.661105 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:13.156693 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:13.156724 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:13.156741 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:13.156748 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:13.160955 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:13.161814 1085975 node_ready.go:53] node "ha-942957-m03" has status "Ready":"False"
	I0318 13:13:13.657166 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:13.657195 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:13.657209 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:13.657215 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:13.661720 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:14.156719 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:14.156743 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:14.156751 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:14.156754 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:14.160867 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:14.656383 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:14.656408 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:14.656417 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:14.656420 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:14.660336 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:15.157169 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:15.157203 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:15.157214 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:15.157221 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:15.161235 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:15.657132 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:15.657167 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:15.657177 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:15.657182 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:15.661801 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:15.662576 1085975 node_ready.go:53] node "ha-942957-m03" has status "Ready":"False"
	I0318 13:13:16.156906 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:16.156933 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:16.156941 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:16.156947 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:16.160966 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:16.657322 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:16.657348 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:16.657357 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:16.657361 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:16.661553 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:17.157004 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:17.157038 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.157047 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.157051 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.168456 1085975 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 13:13:17.657184 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:17.657212 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.657221 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.657226 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.662171 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:17.663898 1085975 node_ready.go:49] node "ha-942957-m03" has status "Ready":"True"
	I0318 13:13:17.663922 1085975 node_ready.go:38] duration metric: took 6.507835476s for node "ha-942957-m03" to be "Ready" ...
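The node wait above polls GET /api/v1/nodes/ha-942957-m03 roughly every 500ms until the Ready condition turns True. A hedged one-line kubectl equivalent of the same wait (not something the test itself runs):

  # block until the node reports Ready, with the 6-minute ceiling used above
  kubectl wait --for=condition=Ready node/ha-942957-m03 --timeout=6m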
	I0318 13:13:17.663936 1085975 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:13:17.664021 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:13:17.664035 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.664049 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.664065 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.675735 1085975 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 13:13:17.682451 1085975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.682556 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-f6dtz
	I0318 13:13:17.682572 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.682580 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.682584 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.686432 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:17.687231 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:17.687248 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.687254 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.687257 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.690826 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:17.691386 1085975 pod_ready.go:92] pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:17.691406 1085975 pod_ready.go:81] duration metric: took 8.927182ms for pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.691416 1085975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.691482 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pbr9j
	I0318 13:13:17.691491 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.691500 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.691506 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.694987 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:17.695797 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:17.695815 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.695845 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.695853 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.699574 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:17.700249 1085975 pod_ready.go:92] pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:17.700273 1085975 pod_ready.go:81] duration metric: took 8.843875ms for pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.700289 1085975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.700359 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957
	I0318 13:13:17.700370 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.700382 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.700394 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.705683 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:17.706392 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:17.706414 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.706425 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.706430 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.711865 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:17.712474 1085975 pod_ready.go:92] pod "etcd-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:17.712502 1085975 pod_ready.go:81] duration metric: took 12.203007ms for pod "etcd-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.712515 1085975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.712611 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:13:17.712625 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.712636 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.712642 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.720930 1085975 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 13:13:17.721464 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:17.721479 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.721486 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.721491 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.726082 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:17.726897 1085975 pod_ready.go:92] pod "etcd-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:17.726922 1085975 pod_ready.go:81] duration metric: took 14.394384ms for pod "etcd-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.726937 1085975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.857286 1085975 request.go:629] Waited for 130.250688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m03
	I0318 13:13:17.857372 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m03
	I0318 13:13:17.857384 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.857394 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.857404 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.861861 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:18.057952 1085975 request.go:629] Waited for 195.370725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.058072 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.058084 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:18.058091 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:18.058095 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:18.072442 1085975 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0318 13:13:18.257653 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m03
	I0318 13:13:18.257680 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:18.257689 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:18.257694 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:18.261641 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:18.457876 1085975 request.go:629] Waited for 195.042096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.457954 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.457962 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:18.457972 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:18.457979 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:18.461800 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:18.727621 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m03
	I0318 13:13:18.727653 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:18.727664 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:18.727669 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:18.731404 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:18.857549 1085975 request.go:629] Waited for 125.300467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.857647 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.857657 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:18.857665 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:18.857672 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:18.861525 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:19.227177 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m03
	I0318 13:13:19.227203 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:19.227211 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:19.227216 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:19.231641 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:19.257778 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:19.257807 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:19.257817 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:19.257822 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:19.262060 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:19.262505 1085975 pod_ready.go:92] pod "etcd-ha-942957-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:19.262525 1085975 pod_ready.go:81] duration metric: took 1.53558222s for pod "etcd-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:19.262542 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:19.458027 1085975 request.go:629] Waited for 195.382217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957
	I0318 13:13:19.458117 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957
	I0318 13:13:19.458123 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:19.458131 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:19.458135 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:19.462232 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:19.657717 1085975 request.go:629] Waited for 194.488617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:19.657817 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:19.657823 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:19.657831 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:19.657835 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:19.662050 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:19.662664 1085975 pod_ready.go:92] pod "kube-apiserver-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:19.662688 1085975 pod_ready.go:81] duration metric: took 400.138336ms for pod "kube-apiserver-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:19.662706 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:19.857727 1085975 request.go:629] Waited for 194.92648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m02
	I0318 13:13:19.857808 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m02
	I0318 13:13:19.857813 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:19.857820 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:19.857824 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:19.861878 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:20.058046 1085975 request.go:629] Waited for 195.187918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:20.058116 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:20.058123 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:20.058131 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:20.058135 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:20.062446 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:20.063239 1085975 pod_ready.go:92] pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:20.063267 1085975 pod_ready.go:81] duration metric: took 400.548134ms for pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:20.063279 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:20.257796 1085975 request.go:629] Waited for 194.399218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m03
	I0318 13:13:20.257870 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m03
	I0318 13:13:20.257877 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:20.257887 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:20.257894 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:20.262526 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:20.458238 1085975 request.go:629] Waited for 194.523823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:20.458323 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:20.458328 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:20.458336 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:20.458340 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:20.462593 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:20.657309 1085975 request.go:629] Waited for 93.18291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m03
	I0318 13:13:20.657377 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m03
	I0318 13:13:20.657403 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:20.657411 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:20.657416 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:20.661819 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:20.857900 1085975 request.go:629] Waited for 195.374284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:20.858004 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:20.858025 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:20.858036 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:20.858042 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:20.861507 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:21.064310 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m03
	I0318 13:13:21.064336 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:21.064345 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:21.064350 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:21.069152 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:21.257610 1085975 request.go:629] Waited for 187.370183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:21.257725 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:21.257735 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:21.257744 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:21.257748 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:21.262045 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:21.262848 1085975 pod_ready.go:92] pod "kube-apiserver-ha-942957-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:21.262870 1085975 pod_ready.go:81] duration metric: took 1.199579409s for pod "kube-apiserver-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:21.262883 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:21.457265 1085975 request.go:629] Waited for 194.290228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957
	I0318 13:13:21.457377 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957
	I0318 13:13:21.457388 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:21.457395 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:21.457398 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:21.461466 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:21.657513 1085975 request.go:629] Waited for 195.116252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:21.657597 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:21.657605 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:21.657614 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:21.657622 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:21.661548 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:21.662076 1085975 pod_ready.go:92] pod "kube-controller-manager-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:21.662096 1085975 pod_ready.go:81] duration metric: took 399.205425ms for pod "kube-controller-manager-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:21.662106 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:21.857653 1085975 request.go:629] Waited for 195.429972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m02
	I0318 13:13:21.857754 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m02
	I0318 13:13:21.857766 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:21.857779 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:21.857788 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:21.865119 1085975 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 13:13:22.057808 1085975 request.go:629] Waited for 191.804251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:22.057872 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:22.057877 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:22.057884 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:22.057887 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:22.062028 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:22.062493 1085975 pod_ready.go:92] pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:22.062510 1085975 pod_ready.go:81] duration metric: took 400.398049ms for pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:22.062527 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:22.257695 1085975 request.go:629] Waited for 195.083612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m03
	I0318 13:13:22.257762 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m03
	I0318 13:13:22.257767 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:22.257776 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:22.257783 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:22.262252 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:22.457700 1085975 request.go:629] Waited for 194.385677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:22.457802 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:22.457808 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:22.457816 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:22.457821 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:22.461509 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:22.462220 1085975 pod_ready.go:92] pod "kube-controller-manager-ha-942957-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:22.462244 1085975 pod_ready.go:81] duration metric: took 399.706188ms for pod "kube-controller-manager-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:22.462259 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97vsd" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:22.657883 1085975 request.go:629] Waited for 195.518599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97vsd
	I0318 13:13:22.657973 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97vsd
	I0318 13:13:22.657986 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:22.657999 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:22.658011 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:22.662224 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:22.858251 1085975 request.go:629] Waited for 195.398102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:22.858339 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:22.858352 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:22.858365 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:22.858370 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:22.864419 1085975 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:13:22.865369 1085975 pod_ready.go:92] pod "kube-proxy-97vsd" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:22.865391 1085975 pod_ready.go:81] duration metric: took 403.124782ms for pod "kube-proxy-97vsd" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:22.865402 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rxtls" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:23.057505 1085975 request.go:629] Waited for 191.993257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxtls
	I0318 13:13:23.057594 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxtls
	I0318 13:13:23.057606 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:23.057620 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:23.057635 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:23.063204 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:23.257660 1085975 request.go:629] Waited for 193.399419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:23.257725 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:23.257730 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:23.257737 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:23.257741 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:23.262438 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:23.263534 1085975 pod_ready.go:92] pod "kube-proxy-rxtls" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:23.263557 1085975 pod_ready.go:81] duration metric: took 398.149534ms for pod "kube-proxy-rxtls" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:23.263568 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjmnr" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:23.457799 1085975 request.go:629] Waited for 194.091973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjmnr
	I0318 13:13:23.457901 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjmnr
	I0318 13:13:23.457914 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:23.457925 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:23.457934 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:23.463169 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:23.658163 1085975 request.go:629] Waited for 194.39344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:23.658288 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:23.658308 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:23.658318 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:23.658326 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:23.663018 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:23.663699 1085975 pod_ready.go:92] pod "kube-proxy-vjmnr" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:23.663722 1085975 pod_ready.go:81] duration metric: took 400.148512ms for pod "kube-proxy-vjmnr" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:23.663732 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:23.857854 1085975 request.go:629] Waited for 194.051277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957
	I0318 13:13:23.857945 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957
	I0318 13:13:23.857951 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:23.857959 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:23.857964 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:23.862153 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:24.057764 1085975 request.go:629] Waited for 194.415091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:24.057874 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:24.057886 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.057904 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.057911 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.061476 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:24.062523 1085975 pod_ready.go:92] pod "kube-scheduler-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:24.062555 1085975 pod_ready.go:81] duration metric: took 398.815162ms for pod "kube-scheduler-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:24.062578 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:24.257289 1085975 request.go:629] Waited for 194.622424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m02
	I0318 13:13:24.257391 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m02
	I0318 13:13:24.257399 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.257413 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.257419 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.261115 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:24.458272 1085975 request.go:629] Waited for 196.419197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:24.458346 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:24.458353 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.458364 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.458371 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.463579 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:24.464729 1085975 pod_ready.go:92] pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:24.464753 1085975 pod_ready.go:81] duration metric: took 402.166842ms for pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:24.464763 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:24.657242 1085975 request.go:629] Waited for 192.391629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m03
	I0318 13:13:24.657337 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m03
	I0318 13:13:24.657349 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.657361 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.657372 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.661275 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:24.858280 1085975 request.go:629] Waited for 196.386855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:24.858944 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:24.859028 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.859050 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.859065 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.864039 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:24.864721 1085975 pod_ready.go:92] pod "kube-scheduler-ha-942957-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:24.864747 1085975 pod_ready.go:81] duration metric: took 399.977484ms for pod "kube-scheduler-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:24.864757 1085975 pod_ready.go:38] duration metric: took 7.200805522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
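The pod waits above iterate over the label selectors listed in the pod_ready.go line; an approximate kubectl sketch of the same check, looping over those selectors:

  # wait for the system-critical pods minikube verifies, one selector at a time
  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
    kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
  done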
	I0318 13:13:24.864774 1085975 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:13:24.864829 1085975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:13:24.887092 1085975 api_server.go:72] duration metric: took 13.986674275s to wait for apiserver process to appear ...
	I0318 13:13:24.887123 1085975 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:13:24.887149 1085975 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0318 13:13:24.892626 1085975 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0318 13:13:24.892714 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/version
	I0318 13:13:24.892722 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.892730 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.892736 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.893923 1085975 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 13:13:24.894008 1085975 api_server.go:141] control plane version: v1.28.4
	I0318 13:13:24.894024 1085975 api_server.go:131] duration metric: took 6.894698ms to wait for apiserver health ...
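The healthz probe is a plain authenticated GET against the apiserver; using the client certificate paths from the kapi.go client config earlier in this log, an equivalent curl would look roughly like this (the 200/"ok" matches what api_server.go:279 reports above):

  curl --cacert /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt \
       --cert   /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt \
       --key    /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key \
       https://192.168.39.68:8443/healthz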
	I0318 13:13:24.894033 1085975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:13:25.057498 1085975 request.go:629] Waited for 163.36708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:13:25.057561 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:13:25.057566 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:25.057573 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.057578 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:25.064044 1085975 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:13:25.070828 1085975 system_pods.go:59] 24 kube-system pods found
	I0318 13:13:25.070866 1085975 system_pods.go:61] "coredns-5dd5756b68-f6dtz" [78994887-c343-49aa-bc5d-e099da752ad6] Running
	I0318 13:13:25.070873 1085975 system_pods.go:61] "coredns-5dd5756b68-pbr9j" [b011a4b6-807e-4af3-90f5-bc9af8ccd454] Running
	I0318 13:13:25.070878 1085975 system_pods.go:61] "etcd-ha-942957" [e3be3484-ebfd-4409-9209-4ef3b656e8d5] Running
	I0318 13:13:25.070883 1085975 system_pods.go:61] "etcd-ha-942957-m02" [2c328aba-cb1d-4ce7-82d2-ee469be1dea3] Running
	I0318 13:13:25.070888 1085975 system_pods.go:61] "etcd-ha-942957-m03" [0ad37fbd-7093-465a-a0d2-9ba364ea4600] Running
	I0318 13:13:25.070892 1085975 system_pods.go:61] "kindnet-4rf6r" [619ed2f9-ed21-43ba-988d-e25959f55fcb] Running
	I0318 13:13:25.070898 1085975 system_pods.go:61] "kindnet-6rgvl" [eb410475-7c79-4ac1-b7df-a4781100d228] Running
	I0318 13:13:25.070903 1085975 system_pods.go:61] "kindnet-d4smn" [3c9d8fe8-55d9-4682-910f-d2e43efc0a2a] Running
	I0318 13:13:25.070907 1085975 system_pods.go:61] "kube-apiserver-ha-942957" [b0108c9e-26e4-46f5-a1c4-c069eba5b77f] Running
	I0318 13:13:25.070912 1085975 system_pods.go:61] "kube-apiserver-ha-942957-m02" [16270dbb-6afa-4f37-96dc-846a220bfc7b] Running
	I0318 13:13:25.070920 1085975 system_pods.go:61] "kube-apiserver-ha-942957-m03" [c62e4f36-881f-4d6e-b81d-28b250bf0fa4] Running
	I0318 13:13:25.070926 1085975 system_pods.go:61] "kube-controller-manager-ha-942957" [7543e199-eed7-4379-8f21-eb3171cfcfd4] Running
	I0318 13:13:25.070935 1085975 system_pods.go:61] "kube-controller-manager-ha-942957-m02" [dfdb2822-92f0-4146-8ef5-103524b684d4] Running
	I0318 13:13:25.070940 1085975 system_pods.go:61] "kube-controller-manager-ha-942957-m03" [4c68f3e5-a122-4f2d-8aa5-5fa9ffdf4ac5] Running
	I0318 13:13:25.070946 1085975 system_pods.go:61] "kube-proxy-97vsd" [a4d03704-5a4b-4973-b178-912218d00802] Running
	I0318 13:13:25.070952 1085975 system_pods.go:61] "kube-proxy-rxtls" [0ac91025-af8e-4f13-8f0c-eae1b7f4d046] Running
	I0318 13:13:25.070957 1085975 system_pods.go:61] "kube-proxy-vjmnr" [e7dac65a-80b9-4e01-b4b0-10222991b604] Running
	I0318 13:13:25.070963 1085975 system_pods.go:61] "kube-scheduler-ha-942957" [125e01b5-776d-43ef-ac0e-3e21693cee59] Running
	I0318 13:13:25.070972 1085975 system_pods.go:61] "kube-scheduler-ha-942957-m02" [8ca9c332-c8ca-4991-955d-7fc4d0939fd0] Running
	I0318 13:13:25.070978 1085975 system_pods.go:61] "kube-scheduler-ha-942957-m03" [f843b5cb-393e-4890-a188-a750c4571f64] Running
	I0318 13:13:25.070986 1085975 system_pods.go:61] "kube-vip-ha-942957" [731b23dc-6b59-4ffb-bf5b-c79279c55d75] Running
	I0318 13:13:25.070991 1085975 system_pods.go:61] "kube-vip-ha-942957-m02" [85b36617-81b8-446c-967c-f3c0c60d3926] Running
	I0318 13:13:25.070996 1085975 system_pods.go:61] "kube-vip-ha-942957-m03" [b461a5fd-5899-4d2f-aff4-ebf58a0c1b97] Running
	I0318 13:13:25.071002 1085975 system_pods.go:61] "storage-provisioner" [b67e544b-41f2-4be4-90ed-971378c82a76] Running
	I0318 13:13:25.071012 1085975 system_pods.go:74] duration metric: took 176.971252ms to wait for pod list to return data ...
	I0318 13:13:25.071025 1085975 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:13:25.258160 1085975 request.go:629] Waited for 187.038497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0318 13:13:25.258245 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0318 13:13:25.258252 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:25.258263 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.258268 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:25.262178 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:25.262345 1085975 default_sa.go:45] found service account: "default"
	I0318 13:13:25.262363 1085975 default_sa.go:55] duration metric: took 191.328533ms for default service account to be created ...
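The default-service-account check above is just a list of ServiceAccounts in the default namespace; a kubectl equivalent:

  # the check passes once a "default" ServiceAccount exists
  kubectl -n default get serviceaccount default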
	I0318 13:13:25.262377 1085975 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:13:25.458089 1085975 request.go:629] Waited for 195.609101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:13:25.458174 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:13:25.458182 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:25.458192 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.458203 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:25.466648 1085975 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 13:13:25.473378 1085975 system_pods.go:86] 24 kube-system pods found
	I0318 13:13:25.473418 1085975 system_pods.go:89] "coredns-5dd5756b68-f6dtz" [78994887-c343-49aa-bc5d-e099da752ad6] Running
	I0318 13:13:25.473426 1085975 system_pods.go:89] "coredns-5dd5756b68-pbr9j" [b011a4b6-807e-4af3-90f5-bc9af8ccd454] Running
	I0318 13:13:25.473434 1085975 system_pods.go:89] "etcd-ha-942957" [e3be3484-ebfd-4409-9209-4ef3b656e8d5] Running
	I0318 13:13:25.473440 1085975 system_pods.go:89] "etcd-ha-942957-m02" [2c328aba-cb1d-4ce7-82d2-ee469be1dea3] Running
	I0318 13:13:25.473445 1085975 system_pods.go:89] "etcd-ha-942957-m03" [0ad37fbd-7093-465a-a0d2-9ba364ea4600] Running
	I0318 13:13:25.473450 1085975 system_pods.go:89] "kindnet-4rf6r" [619ed2f9-ed21-43ba-988d-e25959f55fcb] Running
	I0318 13:13:25.473456 1085975 system_pods.go:89] "kindnet-6rgvl" [eb410475-7c79-4ac1-b7df-a4781100d228] Running
	I0318 13:13:25.473461 1085975 system_pods.go:89] "kindnet-d4smn" [3c9d8fe8-55d9-4682-910f-d2e43efc0a2a] Running
	I0318 13:13:25.473467 1085975 system_pods.go:89] "kube-apiserver-ha-942957" [b0108c9e-26e4-46f5-a1c4-c069eba5b77f] Running
	I0318 13:13:25.473476 1085975 system_pods.go:89] "kube-apiserver-ha-942957-m02" [16270dbb-6afa-4f37-96dc-846a220bfc7b] Running
	I0318 13:13:25.473483 1085975 system_pods.go:89] "kube-apiserver-ha-942957-m03" [c62e4f36-881f-4d6e-b81d-28b250bf0fa4] Running
	I0318 13:13:25.473491 1085975 system_pods.go:89] "kube-controller-manager-ha-942957" [7543e199-eed7-4379-8f21-eb3171cfcfd4] Running
	I0318 13:13:25.473502 1085975 system_pods.go:89] "kube-controller-manager-ha-942957-m02" [dfdb2822-92f0-4146-8ef5-103524b684d4] Running
	I0318 13:13:25.473511 1085975 system_pods.go:89] "kube-controller-manager-ha-942957-m03" [4c68f3e5-a122-4f2d-8aa5-5fa9ffdf4ac5] Running
	I0318 13:13:25.473524 1085975 system_pods.go:89] "kube-proxy-97vsd" [a4d03704-5a4b-4973-b178-912218d00802] Running
	I0318 13:13:25.473531 1085975 system_pods.go:89] "kube-proxy-rxtls" [0ac91025-af8e-4f13-8f0c-eae1b7f4d046] Running
	I0318 13:13:25.473537 1085975 system_pods.go:89] "kube-proxy-vjmnr" [e7dac65a-80b9-4e01-b4b0-10222991b604] Running
	I0318 13:13:25.473547 1085975 system_pods.go:89] "kube-scheduler-ha-942957" [125e01b5-776d-43ef-ac0e-3e21693cee59] Running
	I0318 13:13:25.473557 1085975 system_pods.go:89] "kube-scheduler-ha-942957-m02" [8ca9c332-c8ca-4991-955d-7fc4d0939fd0] Running
	I0318 13:13:25.473572 1085975 system_pods.go:89] "kube-scheduler-ha-942957-m03" [f843b5cb-393e-4890-a188-a750c4571f64] Running
	I0318 13:13:25.473579 1085975 system_pods.go:89] "kube-vip-ha-942957" [731b23dc-6b59-4ffb-bf5b-c79279c55d75] Running
	I0318 13:13:25.473586 1085975 system_pods.go:89] "kube-vip-ha-942957-m02" [85b36617-81b8-446c-967c-f3c0c60d3926] Running
	I0318 13:13:25.473592 1085975 system_pods.go:89] "kube-vip-ha-942957-m03" [b461a5fd-5899-4d2f-aff4-ebf58a0c1b97] Running
	I0318 13:13:25.473599 1085975 system_pods.go:89] "storage-provisioner" [b67e544b-41f2-4be4-90ed-971378c82a76] Running
	I0318 13:13:25.473610 1085975 system_pods.go:126] duration metric: took 211.224055ms to wait for k8s-apps to be running ...
	I0318 13:13:25.473623 1085975 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:13:25.473697 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:13:25.492440 1085975 system_svc.go:56] duration metric: took 18.802632ms WaitForService to wait for kubelet
	I0318 13:13:25.492475 1085975 kubeadm.go:576] duration metric: took 14.59206562s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:13:25.492500 1085975 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:13:25.658012 1085975 request.go:629] Waited for 165.409924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes
	I0318 13:13:25.658110 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes
	I0318 13:13:25.658118 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:25.658135 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.658146 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:25.662240 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:25.663543 1085975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:13:25.663564 1085975 node_conditions.go:123] node cpu capacity is 2
	I0318 13:13:25.663596 1085975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:13:25.663600 1085975 node_conditions.go:123] node cpu capacity is 2
	I0318 13:13:25.663604 1085975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:13:25.663607 1085975 node_conditions.go:123] node cpu capacity is 2
	I0318 13:13:25.663611 1085975 node_conditions.go:105] duration metric: took 171.103188ms to run NodePressure ...
	I0318 13:13:25.663625 1085975 start.go:240] waiting for startup goroutines ...
	I0318 13:13:25.663646 1085975 start.go:254] writing updated cluster config ...
	I0318 13:13:25.663976 1085975 ssh_runner.go:195] Run: rm -f paused
	I0318 13:13:25.722800 1085975 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:13:25.726143 1085975 out.go:177] * Done! kubectl is now configured to use "ha-942957" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.138908383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767819138873506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80e488f0-5465-47b7-8ffa-69089ace5945 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.139741173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4eb9dee-2cda-40f2-b43d-0fa7875939b5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.139817192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4eb9dee-2cda-40f2-b43d-0fa7875939b5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.140400729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767609255370699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3084769e1ff800f860efac29271cdcd098fb57447c7f13bd9fec037208560ad7,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710767516453801104,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd6f28e018805c51b58b9f0084b4e15205294268eccc8e62b08ba21552f6f37,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767515456714736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:488ea7fc9ea1fc12da454e30b56509e140cafa5f8321f6441012b164da06dc06,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767454288980537,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297437118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297819371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]
string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9,PodSandboxId:abdb2ce8343b165bfb2de788ac1742c8fff0ed0340f5d996117716b40f3e208a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767452599377884,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767450094181870,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191b8657592054e01f5a5c2b65956fed40ddb87b4aa2adfbf9dfa4cbfcade00,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767431324557459,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767428281487117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1,PodSandboxId:67ed649bec722b75c8665a2b23ba9b84394ec9daa284ecd7024b445b8544a33f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767428186434711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242,PodSandboxId:c0a1a03e46a5503ef7ffcb6ba6895567c59b47e07556df830e861e358e675b8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767428118703376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767428135776611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4eb9dee-2cda-40f2-b43d-0fa7875939b5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.184233625Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2a786b9-8f8c-43fc-afdb-69ef6f8acd86 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.184310475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2a786b9-8f8c-43fc-afdb-69ef6f8acd86 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.185568996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3462421c-c331-4f95-a13b-855b37a1fd71 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.186085918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767819186061648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3462421c-c331-4f95-a13b-855b37a1fd71 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.186849707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d938a956-7328-4f07-9c10-80cdf898dfa4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.186899954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d938a956-7328-4f07-9c10-80cdf898dfa4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.187146822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767609255370699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3084769e1ff800f860efac29271cdcd098fb57447c7f13bd9fec037208560ad7,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710767516453801104,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd6f28e018805c51b58b9f0084b4e15205294268eccc8e62b08ba21552f6f37,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767515456714736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:488ea7fc9ea1fc12da454e30b56509e140cafa5f8321f6441012b164da06dc06,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767454288980537,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297437118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297819371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]
string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9,PodSandboxId:abdb2ce8343b165bfb2de788ac1742c8fff0ed0340f5d996117716b40f3e208a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767452599377884,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767450094181870,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191b8657592054e01f5a5c2b65956fed40ddb87b4aa2adfbf9dfa4cbfcade00,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767431324557459,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767428281487117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1,PodSandboxId:67ed649bec722b75c8665a2b23ba9b84394ec9daa284ecd7024b445b8544a33f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767428186434711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242,PodSandboxId:c0a1a03e46a5503ef7ffcb6ba6895567c59b47e07556df830e861e358e675b8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767428118703376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767428135776611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d938a956-7328-4f07-9c10-80cdf898dfa4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.231933271Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad4ea558-6186-4fee-9340-6b731e004cbc name=/runtime.v1.RuntimeService/Version
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.232002276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad4ea558-6186-4fee-9340-6b731e004cbc name=/runtime.v1.RuntimeService/Version
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.233270002Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5e8b813-359f-43da-8676-848f4219a39a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.233812475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767819233789217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5e8b813-359f-43da-8676-848f4219a39a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.234404856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81ef1868-ec14-4477-af2f-1a394a2b862b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.234458559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81ef1868-ec14-4477-af2f-1a394a2b862b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.234840488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767609255370699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3084769e1ff800f860efac29271cdcd098fb57447c7f13bd9fec037208560ad7,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710767516453801104,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd6f28e018805c51b58b9f0084b4e15205294268eccc8e62b08ba21552f6f37,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767515456714736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:488ea7fc9ea1fc12da454e30b56509e140cafa5f8321f6441012b164da06dc06,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767454288980537,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297437118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297819371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]
string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9,PodSandboxId:abdb2ce8343b165bfb2de788ac1742c8fff0ed0340f5d996117716b40f3e208a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767452599377884,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767450094181870,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191b8657592054e01f5a5c2b65956fed40ddb87b4aa2adfbf9dfa4cbfcade00,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767431324557459,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767428281487117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1,PodSandboxId:67ed649bec722b75c8665a2b23ba9b84394ec9daa284ecd7024b445b8544a33f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767428186434711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242,PodSandboxId:c0a1a03e46a5503ef7ffcb6ba6895567c59b47e07556df830e861e358e675b8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767428118703376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767428135776611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81ef1868-ec14-4477-af2f-1a394a2b862b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.274479719Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3b6a854-295a-40b1-a9ea-67676289a018 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.274589455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3b6a854-295a-40b1-a9ea-67676289a018 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.275736498Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1804ef9-3bec-444c-8886-fe516393b50f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.276187675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767819276164856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1804ef9-3bec-444c-8886-fe516393b50f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.277077529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66e1d6ee-725a-4821-8d78-44cb2c13a869 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.277128713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66e1d6ee-725a-4821-8d78-44cb2c13a869 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:16:59 ha-942957 crio[674]: time="2024-03-18 13:16:59.277573023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767609255370699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3084769e1ff800f860efac29271cdcd098fb57447c7f13bd9fec037208560ad7,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710767516453801104,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd6f28e018805c51b58b9f0084b4e15205294268eccc8e62b08ba21552f6f37,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767515456714736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:488ea7fc9ea1fc12da454e30b56509e140cafa5f8321f6441012b164da06dc06,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767454288980537,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297437118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297819371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]
string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9,PodSandboxId:abdb2ce8343b165bfb2de788ac1742c8fff0ed0340f5d996117716b40f3e208a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767452599377884,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767450094181870,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191b8657592054e01f5a5c2b65956fed40ddb87b4aa2adfbf9dfa4cbfcade00,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767431324557459,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767428281487117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1,PodSandboxId:67ed649bec722b75c8665a2b23ba9b84394ec9daa284ecd7024b445b8544a33f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767428186434711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242,PodSandboxId:c0a1a03e46a5503ef7ffcb6ba6895567c59b47e07556df830e861e358e675b8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767428118703376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767428135776611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66e1d6ee-725a-4821-8d78-44cb2c13a869 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bc6f97ca3edce       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a2d21119e214a       busybox-5b5d89c9d6-h4q2t
	3084769e1ff80       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  1                   750ec46160c5a       kube-vip-ha-942957
	4fd6f28e01880       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       1                   5fcb429680aac       storage-provisioner
	c859be2ef6bde       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   0b6911927b37f       coredns-5dd5756b68-f6dtz
	e2cf377b129d8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   3daf97324e58a       coredns-5dd5756b68-pbr9j
	488ea7fc9ea1f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       0                   5fcb429680aac       storage-provisioner
	3a01c2a33ecf6       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    6 minutes ago       Running             kindnet-cni               0                   abdb2ce8343b1       kindnet-6rgvl
	11bc6358bf6d2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      6 minutes ago       Running             kube-proxy                0                   c4b520f79bf4b       kube-proxy-97vsd
	3191b86575920       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Exited              kube-vip                  0                   750ec46160c5a       kube-vip-ha-942957
	09364d1b0b8ec       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      6 minutes ago       Running             kube-scheduler            0                   6e0049bc30922       kube-scheduler-ha-942957
	829af6255f575       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      6 minutes ago       Running             kube-controller-manager   0                   67ed649bec722       kube-controller-manager-ha-942957
	ac909d1fea8aa       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      6 minutes ago       Running             etcd                      0                   c9e7a1111cb30       etcd-ha-942957
	ff86796bcd151       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      6 minutes ago       Running             kube-apiserver            0                   c0a1a03e46a55       kube-apiserver-ha-942957
	
	
	==> coredns [c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67] <==
	[INFO] 127.0.0.1:50477 - 3303 "HINFO IN 7694853832209238896.6872666870011795296. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023717101s
	[INFO] 10.244.1.2:48458 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.004729471s
	[INFO] 10.244.2.2:47528 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000307517s
	[INFO] 10.244.2.2:48138 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000123939s
	[INFO] 10.244.0.4:40261 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000082467s
	[INFO] 10.244.0.4:59741 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000711436s
	[INFO] 10.244.1.2:33325 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003796173s
	[INFO] 10.244.1.2:40118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184538s
	[INFO] 10.244.1.2:38695 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158047s
	[INFO] 10.244.2.2:39278 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001539379s
	[INFO] 10.244.2.2:48574 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165918s
	[INFO] 10.244.0.4:52698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113008s
	[INFO] 10.244.0.4:50001 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135799s
	[INFO] 10.244.0.4:49373 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159584s
	[INFO] 10.244.1.2:44441 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118463s
	[INFO] 10.244.2.2:42552 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221661s
	[INFO] 10.244.2.2:46062 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090758s
	[INFO] 10.244.0.4:53179 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092569s
	[INFO] 10.244.1.2:45351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128077s
	[INFO] 10.244.1.2:52758 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144551s
	[INFO] 10.244.1.2:47551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000203433s
	[INFO] 10.244.2.2:53980 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115616s
	[INFO] 10.244.2.2:55318 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000181469s
	[INFO] 10.244.0.4:60630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069346s
	[INFO] 10.244.0.4:41251 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040242s
	
	
	==> coredns [e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945] <==
	[INFO] 10.244.1.2:53196 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146942s
	[INFO] 10.244.2.2:41632 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159168s
	[INFO] 10.244.2.2:46720 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002275748s
	[INFO] 10.244.2.2:50733 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000275044s
	[INFO] 10.244.2.2:37004 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138849s
	[INFO] 10.244.2.2:33563 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224767s
	[INFO] 10.244.2.2:42566 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017421s
	[INFO] 10.244.0.4:54486 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00168008s
	[INFO] 10.244.0.4:46746 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001363608s
	[INFO] 10.244.0.4:38530 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231105s
	[INFO] 10.244.0.4:47152 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045351s
	[INFO] 10.244.0.4:57247 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070307s
	[INFO] 10.244.1.2:43996 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140398s
	[INFO] 10.244.1.2:36237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220389s
	[INFO] 10.244.1.2:37302 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111738s
	[INFO] 10.244.2.2:58342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134629s
	[INFO] 10.244.2.2:43645 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160061s
	[INFO] 10.244.0.4:58375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210567s
	[INFO] 10.244.0.4:50302 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075795s
	[INFO] 10.244.0.4:46012 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084361s
	[INFO] 10.244.1.2:37085 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000242114s
	[INFO] 10.244.2.2:47856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000192734s
	[INFO] 10.244.2.2:42553 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000213437s
	[INFO] 10.244.0.4:53951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102273s
	[INFO] 10.244.0.4:44758 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071111s
	
	
	==> describe nodes <==
	Name:               ha-942957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_10_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:10:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:16:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:13:45 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:13:45 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:13:45 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:13:45 +0000   Mon, 18 Mar 2024 13:10:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-942957
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 98d7d2d7e6f44e39a7470fa399e42587
	  System UUID:                98d7d2d7-e6f4-4e39-a747-0fa399e42587
	  Boot ID:                    8d77322f-23ab-4abb-a476-3a13d0f588c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-h4q2t             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 coredns-5dd5756b68-f6dtz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 coredns-5dd5756b68-pbr9j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-ha-942957                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m24s
	  kube-system                 kindnet-6rgvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-942957             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-942957    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-proxy-97vsd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-942957             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-vip-ha-942957                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m9s   kube-proxy       
	  Normal  Starting                 6m25s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m24s  kubelet          Node ha-942957 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s  kubelet          Node ha-942957 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s  kubelet          Node ha-942957 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m12s  node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal  NodeReady                6m6s   kubelet          Node ha-942957 status is now: NodeReady
	  Normal  RegisteredNode           4m46s  node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal  RegisteredNode           3m35s  node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	
	
	Name:               ha-942957-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_12_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:11:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:14:34 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 13:13:52 +0000   Mon, 18 Mar 2024 13:15:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 13:13:52 +0000   Mon, 18 Mar 2024 13:15:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 13:13:52 +0000   Mon, 18 Mar 2024 13:15:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 13:13:52 +0000   Mon, 18 Mar 2024 13:15:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-942957-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 effa4806d9ac4aae93234a5f4797b41e
	  System UUID:                effa4806-d9ac-4aae-9323-4a5f4797b41e
	  Boot ID:                    7603b2ca-1020-4fd8-bd7f-eeda8ad1e754
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9qmdx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-942957-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m15s
	  kube-system                 kindnet-d4smn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m15s
	  kube-system                 kube-apiserver-ha-942957-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-ha-942957-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-vjmnr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-scheduler-ha-942957-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-vip-ha-942957-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m57s  kube-proxy       
	  Normal  RegisteredNode  4m46s  node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  RegisteredNode  3m35s  node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  NodeNotReady    105s   node-controller  Node ha-942957-m02 status is now: NodeNotReady
	
	
	Name:               ha-942957-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_13_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:13:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:16:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:13:37 +0000   Mon, 18 Mar 2024 13:13:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:13:37 +0000   Mon, 18 Mar 2024 13:13:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:13:37 +0000   Mon, 18 Mar 2024 13:13:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:13:37 +0000   Mon, 18 Mar 2024 13:13:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    ha-942957-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec2118c8153b4c20b6861bbdce99bda8
	  System UUID:                ec2118c8-153b-4c20-b686-1bbdce99bda8
	  Boot ID:                    456dbdb3-b214-42f6-9f4d-35edec402cf9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-b64gc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-942957-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m52s
	  kube-system                 kindnet-4rf6r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-apiserver-ha-942957-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-controller-manager-ha-942957-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-proxy-rxtls                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-scheduler-ha-942957-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-vip-ha-942957-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        3m50s  kube-proxy       
	  Normal  RegisteredNode  3m52s  node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	  Normal  RegisteredNode  3m51s  node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	  Normal  RegisteredNode  3m35s  node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	
	
	Name:               ha-942957-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_14_08_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:14:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:16:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:14:39 +0000   Mon, 18 Mar 2024 13:14:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:14:39 +0000   Mon, 18 Mar 2024 13:14:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:14:39 +0000   Mon, 18 Mar 2024 13:14:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:14:39 +0000   Mon, 18 Mar 2024 13:14:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    ha-942957-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 b16089a645be4a78a5280af4bb880ea8
	  System UUID:                b16089a6-45be-4a78-a528-0af4bb880ea8
	  Boot ID:                    61da23d5-a659-44da-b851-b354c3ec0a4b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-g4lxl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m51s
	  kube-system                 kube-proxy-gjnnp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal  NodeHasSufficientMemory  2m51s (x5 over 2m53s)  kubelet          Node ha-942957-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x5 over 2m53s)  kubelet          Node ha-942957-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x5 over 2m53s)  kubelet          Node ha-942957-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-942957-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar18 13:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053736] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042391] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.541459] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Mar18 13:10] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.634426] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.300912] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.067435] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059503] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.165737] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.136769] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.243119] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.843891] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.062146] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.956739] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +1.288333] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.601273] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.093539] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.596662] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.054967] kauditd_printk_skb: 53 callbacks suppressed
	[Mar18 13:11] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7] <==
	{"level":"warn","ts":"2024-03-18T13:16:59.582764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.593875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.598466Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.603855Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.616978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.625786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.635495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.640439Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.644014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.653053Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.663113Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.673219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.677454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.681931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.690908Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.698331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.703266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.705205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.70964Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.715474Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.722917Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.729771Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.73607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.78004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:16:59.803112Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:16:59 up 7 min,  0 users,  load average: 0.10, 0.32, 0.19
	Linux ha-942957 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9] <==
	I0318 13:16:29.748013       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:16:39.755431       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:16:39.755560       1 main.go:227] handling current node
	I0318 13:16:39.755582       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:16:39.755600       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:39.755803       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0318 13:16:39.755841       1 main.go:250] Node ha-942957-m03 has CIDR [10.244.2.0/24] 
	I0318 13:16:39.755905       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:16:39.755923       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:16:49.772611       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:16:49.772823       1 main.go:227] handling current node
	I0318 13:16:49.772855       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:16:49.772878       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:49.773030       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0318 13:16:49.773052       1 main.go:250] Node ha-942957-m03 has CIDR [10.244.2.0/24] 
	I0318 13:16:49.773129       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:16:49.773164       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:16:59.784361       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:16:59.784390       1 main.go:227] handling current node
	I0318 13:16:59.784399       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:16:59.784404       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:59.784522       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0318 13:16:59.784527       1 main.go:250] Node ha-942957-m03 has CIDR [10.244.2.0/24] 
	I0318 13:16:59.784581       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:16:59.784585       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242] <==
	I0318 13:11:59.275888       1 trace.go:236] Trace[1922427888]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a35392cf-5b77-4800-bbe2-098cb914fa85,client:192.168.39.254,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-942957,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 13:11:53.920) (total time: 5355ms):
	Trace[1922427888]: ["GuaranteedUpdate etcd3" audit-id:a35392cf-5b77-4800-bbe2-098cb914fa85,key:/leases/kube-node-lease/ha-942957,type:*coordination.Lease,resource:leases.coordination.k8s.io 5355ms (13:11:53.920)
	Trace[1922427888]:  ---"Txn call completed" 5354ms (13:11:59.275)]
	Trace[1922427888]: [5.355567346s] [5.355567346s] END
	I0318 13:11:59.276140       1 trace.go:236] Trace[760804851]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:05d413b6-6c2c-4bfc-b063-8f6dc2192f21,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-bvovfeqgqy4akpxvecqne7xhka,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 13:11:53.342) (total time: 5933ms):
	Trace[760804851]: ["GuaranteedUpdate etcd3" audit-id:05d413b6-6c2c-4bfc-b063-8f6dc2192f21,key:/leases/kube-system/apiserver-bvovfeqgqy4akpxvecqne7xhka,type:*coordination.Lease,resource:leases.coordination.k8s.io 5933ms (13:11:53.342)
	Trace[760804851]:  ---"Txn call completed" 5932ms (13:11:59.276)]
	Trace[760804851]: [5.933480347s] [5.933480347s] END
	I0318 13:11:59.276443       1 trace.go:236] Trace[1340811478]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:c1a47ab2-169f-4e2d-b4ae-46a43d2ed2a9,client:192.168.39.68,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-942957-m02,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (18-Mar-2024 13:11:57.576) (total time: 1699ms):
	Trace[1340811478]: ["GuaranteedUpdate etcd3" audit-id:c1a47ab2-169f-4e2d-b4ae-46a43d2ed2a9,key:/minions/ha-942957-m02,type:*core.Node,resource:nodes 1698ms (13:11:57.577)
	Trace[1340811478]:  ---"Txn call completed" 1693ms (13:11:59.272)]
	Trace[1340811478]: ---"About to apply patch" 1694ms (13:11:59.272)
	Trace[1340811478]: [1.699413207s] [1.699413207s] END
	I0318 13:11:59.316127       1 trace.go:236] Trace[411801182]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:908e867f-8aac-40f7-b9fe-590307c5397c,client:192.168.39.22,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 13:11:54.041) (total time: 5274ms):
	Trace[411801182]: [5.27471743s] [5.27471743s] END
	I0318 13:11:59.321132       1 trace.go:236] Trace[513579136]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:22e0975e-3f6d-4dc9-9154-9600cfc3e415,client:192.168.39.22,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 13:11:53.038) (total time: 6282ms):
	Trace[513579136]: [6.282712075s] [6.282712075s] END
	I0318 13:11:59.324401       1 trace.go:236] Trace[664548839]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4a4bb367-da40-41c5-8033-86e3ec397d2d,client:192.168.39.22,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 13:11:52.030) (total time: 7294ms):
	Trace[664548839]: [7.294334356s] [7.294334356s] END
	I0318 13:14:09.515140       1 trace.go:236] Trace[215917531]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2ac5b2dc-04bb-41a0-8259-194f167bd578,client:192.168.39.221,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 13:14:08.822) (total time: 692ms):
	Trace[215917531]: ---"Write to database call succeeded" len:145 692ms (13:14:09.514)
	Trace[215917531]: [692.580604ms] [692.580604ms] END
	I0318 13:14:09.519469       1 trace.go:236] Trace[1808083093]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1db19c4f-3835-4305-bf1a-126052ca1a0e,client:192.168.39.221,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 13:14:08.823) (total time: 695ms):
	Trace[1808083093]: ---"Write to database call succeeded" len:148 695ms (13:14:09.519)
	Trace[1808083093]: [695.996137ms] [695.996137ms] END
	
	
	==> kube-controller-manager [829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1] <==
	I0318 13:13:27.216129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="139.844µs"
	I0318 13:13:27.328104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="74.08951ms"
	I0318 13:13:27.328338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.875µs"
	I0318 13:13:29.457159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.166694ms"
	I0318 13:13:29.457758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="242.285µs"
	I0318 13:13:29.942857       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.942457ms"
	I0318 13:13:29.961459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.518404ms"
	I0318 13:13:29.962867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="506.786µs"
	I0318 13:13:30.055943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.79244ms"
	I0318 13:13:30.058422       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="103.191µs"
	E0318 13:14:06.707244       1 certificate_controller.go:146] Sync csr-rzs6k failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-rzs6k": the object has been modified; please apply your changes to the latest version and try again
	E0318 13:14:06.718825       1 certificate_controller.go:146] Sync csr-rzs6k failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-rzs6k": the object has been modified; please apply your changes to the latest version and try again
	I0318 13:14:08.228253       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-942957-m04\" does not exist"
	I0318 13:14:08.250180       1 range_allocator.go:380] "Set node PodCIDR" node="ha-942957-m04" podCIDRs=["10.244.3.0/24"]
	I0318 13:14:08.284345       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tvmh7"
	I0318 13:14:08.284730       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-g4lxl"
	I0318 13:14:08.459903       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-84mtv"
	I0318 13:14:08.468335       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-z2gzx"
	I0318 13:14:08.563970       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-fg4h5"
	I0318 13:14:12.498955       1 event.go:307] "Event occurred" object="ha-942957-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller"
	I0318 13:14:12.525622       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-942957-m04"
	I0318 13:14:17.193148       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-942957-m04"
	I0318 13:15:14.771604       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-942957-m04"
	I0318 13:15:14.884523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.139417ms"
	I0318 13:15:14.884721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="113.102µs"
	
	
	==> kube-proxy [11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1] <==
	I0318 13:10:50.316906       1 server_others.go:69] "Using iptables proxy"
	I0318 13:10:50.334356       1 node.go:141] Successfully retrieved node IP: 192.168.39.68
	I0318 13:10:50.377482       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:10:50.377528       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:10:50.380218       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:10:50.380333       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:10:50.380556       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:10:50.380608       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:50.382751       1 config.go:188] "Starting service config controller"
	I0318 13:10:50.383144       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:10:50.383193       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:10:50.383198       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:10:50.384291       1 config.go:315] "Starting node config controller"
	I0318 13:10:50.384323       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:10:50.483809       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:10:50.483940       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:10:50.484417       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99] <==
	W0318 13:10:32.689633       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:10:32.689805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:10:32.784751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 13:10:32.784934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 13:10:32.818989       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 13:10:32.819207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:10:32.870024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:10:32.870081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:10:32.880608       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:10:32.881010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:10:32.893396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:10:32.893502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:10:33.002949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:10:33.003142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:10:33.017021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 13:10:33.017071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:10:34.299125       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0318 13:13:07.299612       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-g8dzj\": pod kube-proxy-g8dzj is already assigned to node \"ha-942957-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-g8dzj" node="ha-942957-m03"
	E0318 13:13:07.300066       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 840bd016-9a33-4eea-90ed-324f143b9dac(kube-system/kube-proxy-g8dzj) wasn't assumed so cannot be forgotten"
	E0318 13:13:07.300226       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-g8dzj\": pod kube-proxy-g8dzj is already assigned to node \"ha-942957-m03\"" pod="kube-system/kube-proxy-g8dzj"
	I0318 13:13:07.300450       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-g8dzj" node="ha-942957-m03"
	E0318 13:14:08.326079       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-g4lxl\": pod kindnet-g4lxl is already assigned to node \"ha-942957-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-g4lxl" node="ha-942957-m04"
	E0318 13:14:08.328913       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 2797cae1-24be-4e84-a8ee-39572432d9b6(kube-system/kindnet-g4lxl) wasn't assumed so cannot be forgotten"
	E0318 13:14:08.329041       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-g4lxl\": pod kindnet-g4lxl is already assigned to node \"ha-942957-m04\"" pod="kube-system/kindnet-g4lxl"
	I0318 13:14:08.329098       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-g4lxl" node="ha-942957-m04"
	
	
	==> kubelet <==
	Mar 18 13:12:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:12:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:12:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:13:26 ha-942957 kubelet[1368]: I0318 13:13:26.779511    1368 topology_manager.go:215] "Topology Admit Handler" podUID="19f21998-36db-4286-8e31-bf260f71ea46" podNamespace="default" podName="busybox-5b5d89c9d6-h4q2t"
	Mar 18 13:13:26 ha-942957 kubelet[1368]: I0318 13:13:26.931872    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skkfz\" (UniqueName: \"kubernetes.io/projected/19f21998-36db-4286-8e31-bf260f71ea46-kube-api-access-skkfz\") pod \"busybox-5b5d89c9d6-h4q2t\" (UID: \"19f21998-36db-4286-8e31-bf260f71ea46\") " pod="default/busybox-5b5d89c9d6-h4q2t"
	Mar 18 13:13:35 ha-942957 kubelet[1368]: E0318 13:13:35.068304    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:13:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:13:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:13:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:13:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:14:35 ha-942957 kubelet[1368]: E0318 13:14:35.069918    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:14:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:14:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:14:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:14:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:15:35 ha-942957 kubelet[1368]: E0318 13:15:35.071131    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:15:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:15:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:15:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:15:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:16:35 ha-942957 kubelet[1368]: E0318 13:16:35.067730    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:16:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:16:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:16:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:16:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-942957 -n ha-942957
helpers_test.go:261: (dbg) Run:  kubectl --context ha-942957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.09s)
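The post-mortem logs above show etcd on the surviving control plane repeatedly dropping Raft heartbeats to an inactive peer ("remote-peer-active":false), which is consistent with the secondary control-plane node having been stopped by this test. A minimal sketch of how one could pull just those messages out of the profile's logs, assuming the same workspace layout as the commands above:

	out/minikube-linux-amd64 -p ha-942957 logs | grep 'dropped internal Raft message'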

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (60.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr: exit status 3 (3.197901656s)

                                                
                                                
-- stdout --
	ha-942957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-942957-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:17:04.433817 1090340 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:17:04.433966 1090340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:04.433977 1090340 out.go:304] Setting ErrFile to fd 2...
	I0318 13:17:04.433981 1090340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:04.434198 1090340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:17:04.434732 1090340 out.go:298] Setting JSON to false
	I0318 13:17:04.434793 1090340 mustload.go:65] Loading cluster: ha-942957
	I0318 13:17:04.435512 1090340 notify.go:220] Checking for updates...
	I0318 13:17:04.436077 1090340 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:17:04.436097 1090340 status.go:255] checking status of ha-942957 ...
	I0318 13:17:04.436560 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:04.436615 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:04.453482 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34811
	I0318 13:17:04.453989 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:04.454530 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:04.454554 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:04.454946 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:04.455179 1090340 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:17:04.456745 1090340 status.go:330] ha-942957 host status = "Running" (err=<nil>)
	I0318 13:17:04.456766 1090340 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:04.457057 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:04.457099 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:04.473392 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0318 13:17:04.473904 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:04.474407 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:04.474432 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:04.474753 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:04.474936 1090340 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:17:04.477753 1090340 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:04.478141 1090340 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:04.478169 1090340 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:04.478321 1090340 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:04.478730 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:04.478787 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:04.494282 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I0318 13:17:04.494685 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:04.495116 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:04.495139 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:04.495490 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:04.495662 1090340 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:17:04.495875 1090340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:04.495900 1090340 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:17:04.498524 1090340 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:04.498878 1090340 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:04.498917 1090340 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:04.499005 1090340 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:17:04.499164 1090340 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:17:04.499333 1090340 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:17:04.499518 1090340 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:17:04.590624 1090340 ssh_runner.go:195] Run: systemctl --version
	I0318 13:17:04.599429 1090340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:04.617351 1090340 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:04.617385 1090340 api_server.go:166] Checking apiserver status ...
	I0318 13:17:04.617430 1090340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:04.635720 1090340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0318 13:17:04.647710 1090340 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:04.647791 1090340 ssh_runner.go:195] Run: ls
	I0318 13:17:04.652966 1090340 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:04.657617 1090340 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:04.657654 1090340 status.go:422] ha-942957 apiserver status = Running (err=<nil>)
	I0318 13:17:04.657673 1090340 status.go:257] ha-942957 status: &{Name:ha-942957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:04.657690 1090340 status.go:255] checking status of ha-942957-m02 ...
	I0318 13:17:04.658002 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:04.658039 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:04.674599 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45037
	I0318 13:17:04.675069 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:04.675581 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:04.675600 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:04.675935 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:04.676141 1090340 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:17:04.677714 1090340 status.go:330] ha-942957-m02 host status = "Running" (err=<nil>)
	I0318 13:17:04.677737 1090340 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:17:04.678036 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:04.678081 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:04.694454 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0318 13:17:04.694937 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:04.695423 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:04.695444 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:04.695738 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:04.695967 1090340 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:17:04.698909 1090340 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:04.699389 1090340 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:17:04.699417 1090340 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:04.699540 1090340 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:17:04.699889 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:04.699957 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:04.716402 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41959
	I0318 13:17:04.716883 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:04.717442 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:04.717469 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:04.717766 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:04.717953 1090340 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:17:04.718174 1090340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:04.718196 1090340 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:17:04.721512 1090340 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:04.721971 1090340 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:17:04.722012 1090340 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:04.722200 1090340 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:17:04.722364 1090340 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:17:04.722534 1090340 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:17:04.722718 1090340 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	W0318 13:17:07.208126 1090340 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0318 13:17:07.208249 1090340 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0318 13:17:07.208268 1090340 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:07.208277 1090340 status.go:257] ha-942957-m02 status: &{Name:ha-942957-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 13:17:07.208296 1090340 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:07.208304 1090340 status.go:255] checking status of ha-942957-m03 ...
	I0318 13:17:07.208633 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:07.208687 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:07.225000 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42173
	I0318 13:17:07.225456 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:07.225931 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:07.225955 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:07.226316 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:07.226509 1090340 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:17:07.228028 1090340 status.go:330] ha-942957-m03 host status = "Running" (err=<nil>)
	I0318 13:17:07.228049 1090340 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:07.228405 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:07.228461 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:07.244463 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I0318 13:17:07.244872 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:07.245437 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:07.245468 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:07.245785 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:07.246002 1090340 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:17:07.248874 1090340 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:07.249318 1090340 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:07.249359 1090340 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:07.249458 1090340 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:07.249823 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:07.249868 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:07.264971 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0318 13:17:07.265495 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:07.266036 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:07.266065 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:07.266479 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:07.266729 1090340 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:17:07.266990 1090340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:07.267022 1090340 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:17:07.270096 1090340 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:07.270576 1090340 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:07.270619 1090340 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:07.270783 1090340 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:17:07.270986 1090340 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:17:07.271167 1090340 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:17:07.271318 1090340 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:17:07.351972 1090340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:07.367009 1090340 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:07.367039 1090340 api_server.go:166] Checking apiserver status ...
	I0318 13:17:07.367075 1090340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:07.388366 1090340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0318 13:17:07.398844 1090340 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:07.398907 1090340 ssh_runner.go:195] Run: ls
	I0318 13:17:07.403591 1090340 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:07.408400 1090340 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:07.408428 1090340 status.go:422] ha-942957-m03 apiserver status = Running (err=<nil>)
	I0318 13:17:07.408439 1090340 status.go:257] ha-942957-m03 status: &{Name:ha-942957-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:07.408460 1090340 status.go:255] checking status of ha-942957-m04 ...
	I0318 13:17:07.408770 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:07.408820 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:07.425217 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I0318 13:17:07.425721 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:07.426427 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:07.426473 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:07.426865 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:07.427111 1090340 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:17:07.428939 1090340 status.go:330] ha-942957-m04 host status = "Running" (err=<nil>)
	I0318 13:17:07.428967 1090340 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:07.429424 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:07.429484 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:07.445420 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0318 13:17:07.445918 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:07.446540 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:07.446570 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:07.446926 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:07.447140 1090340 main.go:141] libmachine: (ha-942957-m04) Calling .GetIP
	I0318 13:17:07.449979 1090340 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:07.450414 1090340 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:07.450453 1090340 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:07.450593 1090340 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:07.450940 1090340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:07.451006 1090340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:07.466367 1090340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I0318 13:17:07.466837 1090340 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:07.467386 1090340 main.go:141] libmachine: Using API Version  1
	I0318 13:17:07.467412 1090340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:07.467736 1090340 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:07.467922 1090340 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:17:07.468109 1090340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:07.468128 1090340 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:17:07.471024 1090340 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:07.471507 1090340 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:07.471534 1090340 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:07.471719 1090340 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:17:07.471931 1090340 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:17:07.472133 1090340 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:17:07.472297 1090340 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:17:07.558404 1090340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:07.573260 1090340 status.go:257] ha-942957-m04 status: &{Name:ha-942957-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
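The status failure above is an SSH reachability problem on the freshly restarted secondary (dial tcp 192.168.39.22:22: connect: no route to host), so the node is simply not up yet when status is sampled and the test re-runs the check below. A minimal sketch of how one could wait for the node's SSH port before re-checking status, assuming nc is available on the host and reusing the IP, profile name, and binary path from the log above:

	# poll sshd on ha-942957-m02 (192.168.39.22) until it accepts connections, then re-check status
	until nc -z -w 2 192.168.39.22 22; do sleep 5; done
	out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr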
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr: exit status 3 (4.931793392s)

                                                
                                                
-- stdout --
	ha-942957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-942957-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:17:08.840507 1090435 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:17:08.840683 1090435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:08.840694 1090435 out.go:304] Setting ErrFile to fd 2...
	I0318 13:17:08.840700 1090435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:08.840965 1090435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:17:08.841199 1090435 out.go:298] Setting JSON to false
	I0318 13:17:08.841248 1090435 mustload.go:65] Loading cluster: ha-942957
	I0318 13:17:08.841370 1090435 notify.go:220] Checking for updates...
	I0318 13:17:08.841659 1090435 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:17:08.841679 1090435 status.go:255] checking status of ha-942957 ...
	I0318 13:17:08.842125 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:08.842191 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:08.860186 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38583
	I0318 13:17:08.860774 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:08.861523 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:08.861559 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:08.861934 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:08.862153 1090435 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:17:08.864074 1090435 status.go:330] ha-942957 host status = "Running" (err=<nil>)
	I0318 13:17:08.864092 1090435 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:08.864409 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:08.864459 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:08.880577 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I0318 13:17:08.881047 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:08.881539 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:08.881563 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:08.881911 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:08.882137 1090435 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:17:08.885357 1090435 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:08.885875 1090435 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:08.885911 1090435 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:08.885988 1090435 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:08.886431 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:08.886480 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:08.903133 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I0318 13:17:08.903600 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:08.904230 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:08.904258 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:08.904633 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:08.904866 1090435 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:17:08.905068 1090435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:08.905104 1090435 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:17:08.908294 1090435 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:08.908760 1090435 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:08.908807 1090435 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:08.908916 1090435 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:17:08.909136 1090435 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:17:08.909300 1090435 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:17:08.909449 1090435 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:17:08.992202 1090435 ssh_runner.go:195] Run: systemctl --version
	I0318 13:17:08.998902 1090435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:09.017260 1090435 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:09.017292 1090435 api_server.go:166] Checking apiserver status ...
	I0318 13:17:09.017327 1090435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:09.037297 1090435 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0318 13:17:09.048226 1090435 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:09.048296 1090435 ssh_runner.go:195] Run: ls
	I0318 13:17:09.052939 1090435 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:09.060145 1090435 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:09.060188 1090435 status.go:422] ha-942957 apiserver status = Running (err=<nil>)
	I0318 13:17:09.060200 1090435 status.go:257] ha-942957 status: &{Name:ha-942957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:09.060217 1090435 status.go:255] checking status of ha-942957-m02 ...
	I0318 13:17:09.060681 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:09.060731 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:09.076601 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35571
	I0318 13:17:09.077099 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:09.077629 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:09.077654 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:09.078041 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:09.078251 1090435 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:17:09.080114 1090435 status.go:330] ha-942957-m02 host status = "Running" (err=<nil>)
	I0318 13:17:09.080134 1090435 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:17:09.080862 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:09.080906 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:09.098949 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I0318 13:17:09.099425 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:09.100029 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:09.100063 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:09.100439 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:09.100689 1090435 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:17:09.103445 1090435 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:09.103935 1090435 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:17:09.103970 1090435 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:09.104115 1090435 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:17:09.104474 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:09.104520 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:09.119708 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45039
	I0318 13:17:09.120237 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:09.120830 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:09.120855 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:09.121307 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:09.121540 1090435 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:17:09.121798 1090435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:09.121829 1090435 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:17:09.124849 1090435 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:09.125312 1090435 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:17:09.125338 1090435 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:09.125514 1090435 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:17:09.125708 1090435 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:17:09.125898 1090435 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:17:09.126045 1090435 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	W0318 13:17:10.276138 1090435 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:10.276221 1090435 retry.go:31] will retry after 312.969449ms: dial tcp 192.168.39.22:22: connect: no route to host
	W0318 13:17:13.348176 1090435 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0318 13:17:13.348309 1090435 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0318 13:17:13.348339 1090435 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:13.348351 1090435 status.go:257] ha-942957-m02 status: &{Name:ha-942957-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 13:17:13.348385 1090435 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:13.348400 1090435 status.go:255] checking status of ha-942957-m03 ...
	I0318 13:17:13.348764 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:13.348821 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:13.364103 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35509
	I0318 13:17:13.364607 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:13.365213 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:13.365239 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:13.365611 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:13.365822 1090435 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:17:13.367531 1090435 status.go:330] ha-942957-m03 host status = "Running" (err=<nil>)
	I0318 13:17:13.367554 1090435 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:13.367899 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:13.367937 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:13.384040 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0318 13:17:13.384478 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:13.385021 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:13.385048 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:13.385398 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:13.385638 1090435 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:17:13.388784 1090435 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:13.389249 1090435 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:13.389276 1090435 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:13.389576 1090435 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:13.389916 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:13.389965 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:13.405346 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I0318 13:17:13.405818 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:13.406333 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:13.406356 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:13.406698 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:13.406897 1090435 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:17:13.407186 1090435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:13.407213 1090435 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:17:13.410204 1090435 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:13.410623 1090435 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:13.410681 1090435 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:13.410798 1090435 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:17:13.410998 1090435 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:17:13.411158 1090435 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:17:13.411314 1090435 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:17:13.491744 1090435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:13.509292 1090435 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:13.509332 1090435 api_server.go:166] Checking apiserver status ...
	I0318 13:17:13.509377 1090435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:13.527090 1090435 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0318 13:17:13.536881 1090435 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:13.536938 1090435 ssh_runner.go:195] Run: ls
	I0318 13:17:13.541368 1090435 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:13.546170 1090435 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:13.546193 1090435 status.go:422] ha-942957-m03 apiserver status = Running (err=<nil>)
	I0318 13:17:13.546203 1090435 status.go:257] ha-942957-m03 status: &{Name:ha-942957-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:13.546218 1090435 status.go:255] checking status of ha-942957-m04 ...
	I0318 13:17:13.546504 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:13.546536 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:13.562124 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46165
	I0318 13:17:13.562587 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:13.563334 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:13.563359 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:13.563762 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:13.563966 1090435 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:17:13.565723 1090435 status.go:330] ha-942957-m04 host status = "Running" (err=<nil>)
	I0318 13:17:13.565744 1090435 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:13.566032 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:13.566083 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:13.582250 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0318 13:17:13.582718 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:13.583200 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:13.583220 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:13.583520 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:13.583715 1090435 main.go:141] libmachine: (ha-942957-m04) Calling .GetIP
	I0318 13:17:13.586652 1090435 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:13.587048 1090435 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:13.587081 1090435 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:13.587243 1090435 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:13.587623 1090435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:13.587684 1090435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:13.602990 1090435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34097
	I0318 13:17:13.603367 1090435 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:13.603980 1090435 main.go:141] libmachine: Using API Version  1
	I0318 13:17:13.604010 1090435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:13.604379 1090435 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:13.604589 1090435 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:17:13.604765 1090435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:13.604800 1090435 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:17:13.607634 1090435 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:13.608080 1090435 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:13.608107 1090435 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:13.608248 1090435 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:17:13.608437 1090435 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:17:13.608584 1090435 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:17:13.608721 1090435 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:17:13.692096 1090435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:13.706920 1090435 status.go:257] ha-942957-m04 status: &{Name:ha-942957-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr: exit status 3 (4.959635306s)

-- stdout --
	ha-942957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-942957-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0318 13:17:14.956064 1090531 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:17:14.956359 1090531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:14.956370 1090531 out.go:304] Setting ErrFile to fd 2...
	I0318 13:17:14.956377 1090531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:14.956578 1090531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:17:14.956791 1090531 out.go:298] Setting JSON to false
	I0318 13:17:14.956848 1090531 mustload.go:65] Loading cluster: ha-942957
	I0318 13:17:14.956962 1090531 notify.go:220] Checking for updates...
	I0318 13:17:14.957260 1090531 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:17:14.957279 1090531 status.go:255] checking status of ha-942957 ...
	I0318 13:17:14.957699 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:14.957770 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:14.975916 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0318 13:17:14.976436 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:14.977084 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:14.977105 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:14.977440 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:14.977628 1090531 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:17:14.979386 1090531 status.go:330] ha-942957 host status = "Running" (err=<nil>)
	I0318 13:17:14.979407 1090531 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:14.979749 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:14.979791 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:14.995354 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0318 13:17:14.995809 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:14.996390 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:14.996424 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:14.996802 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:14.997035 1090531 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:17:15.000254 1090531 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:15.000682 1090531 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:15.000704 1090531 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:15.000880 1090531 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:15.001196 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:15.001234 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:15.018595 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41789
	I0318 13:17:15.019147 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:15.019775 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:15.019806 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:15.020244 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:15.020491 1090531 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:17:15.020711 1090531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:15.020756 1090531 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:17:15.024000 1090531 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:15.024450 1090531 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:15.024472 1090531 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:15.024634 1090531 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:17:15.024833 1090531 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:17:15.025016 1090531 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:17:15.025145 1090531 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:17:15.112979 1090531 ssh_runner.go:195] Run: systemctl --version
	I0318 13:17:15.120213 1090531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:15.139847 1090531 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:15.139882 1090531 api_server.go:166] Checking apiserver status ...
	I0318 13:17:15.139920 1090531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:15.155098 1090531 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0318 13:17:15.165254 1090531 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:15.165325 1090531 ssh_runner.go:195] Run: ls
	I0318 13:17:15.170505 1090531 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:15.179273 1090531 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:15.179314 1090531 status.go:422] ha-942957 apiserver status = Running (err=<nil>)
	I0318 13:17:15.179328 1090531 status.go:257] ha-942957 status: &{Name:ha-942957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:15.179351 1090531 status.go:255] checking status of ha-942957-m02 ...
	I0318 13:17:15.179675 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:15.179725 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:15.195143 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0318 13:17:15.195673 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:15.196273 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:15.196298 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:15.196609 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:15.196824 1090531 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:17:15.198399 1090531 status.go:330] ha-942957-m02 host status = "Running" (err=<nil>)
	I0318 13:17:15.198422 1090531 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:17:15.198746 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:15.198799 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:15.214946 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39017
	I0318 13:17:15.215384 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:15.215875 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:15.215899 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:15.216322 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:15.216519 1090531 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:17:15.219439 1090531 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:15.219957 1090531 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:17:15.219986 1090531 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:15.220100 1090531 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:17:15.220434 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:15.220472 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:15.236696 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45665
	I0318 13:17:15.237135 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:15.237610 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:15.237634 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:15.237979 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:15.238187 1090531 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:17:15.238420 1090531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:15.238448 1090531 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:17:15.240885 1090531 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:15.241286 1090531 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:17:15.241316 1090531 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:15.241439 1090531 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:17:15.241607 1090531 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:17:15.241755 1090531 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:17:15.241927 1090531 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	W0318 13:17:16.420186 1090531 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:16.420277 1090531 retry.go:31] will retry after 236.059224ms: dial tcp 192.168.39.22:22: connect: no route to host
	W0318 13:17:19.492149 1090531 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0318 13:17:19.492249 1090531 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0318 13:17:19.492269 1090531 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:19.492278 1090531 status.go:257] ha-942957-m02 status: &{Name:ha-942957-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 13:17:19.492312 1090531 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:19.492319 1090531 status.go:255] checking status of ha-942957-m03 ...
	I0318 13:17:19.492643 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:19.492689 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:19.508415 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44953
	I0318 13:17:19.508924 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:19.509466 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:19.509495 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:19.509819 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:19.510044 1090531 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:17:19.511554 1090531 status.go:330] ha-942957-m03 host status = "Running" (err=<nil>)
	I0318 13:17:19.511571 1090531 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:19.511882 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:19.511938 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:19.527780 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36475
	I0318 13:17:19.528376 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:19.528895 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:19.528922 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:19.529280 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:19.529479 1090531 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:17:19.532564 1090531 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:19.532980 1090531 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:19.533013 1090531 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:19.533218 1090531 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:19.533553 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:19.533596 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:19.550513 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I0318 13:17:19.550970 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:19.551455 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:19.551487 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:19.551769 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:19.552024 1090531 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:17:19.552285 1090531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:19.552310 1090531 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:17:19.555008 1090531 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:19.555501 1090531 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:19.555541 1090531 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:19.555685 1090531 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:17:19.555859 1090531 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:17:19.556013 1090531 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:17:19.556154 1090531 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:17:19.636537 1090531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:19.653344 1090531 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:19.653374 1090531 api_server.go:166] Checking apiserver status ...
	I0318 13:17:19.653409 1090531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:19.670238 1090531 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0318 13:17:19.680806 1090531 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:19.680883 1090531 ssh_runner.go:195] Run: ls
	I0318 13:17:19.685999 1090531 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:19.690773 1090531 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:19.690805 1090531 status.go:422] ha-942957-m03 apiserver status = Running (err=<nil>)
	I0318 13:17:19.690815 1090531 status.go:257] ha-942957-m03 status: &{Name:ha-942957-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:19.690831 1090531 status.go:255] checking status of ha-942957-m04 ...
	I0318 13:17:19.691139 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:19.691184 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:19.707582 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39157
	I0318 13:17:19.708082 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:19.708604 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:19.708631 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:19.708989 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:19.709180 1090531 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:17:19.710869 1090531 status.go:330] ha-942957-m04 host status = "Running" (err=<nil>)
	I0318 13:17:19.710890 1090531 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:19.711182 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:19.711219 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:19.726660 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45259
	I0318 13:17:19.727155 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:19.727659 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:19.727685 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:19.728055 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:19.728298 1090531 main.go:141] libmachine: (ha-942957-m04) Calling .GetIP
	I0318 13:17:19.731176 1090531 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:19.731650 1090531 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:19.731689 1090531 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:19.731852 1090531 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:19.732151 1090531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:19.732192 1090531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:19.747532 1090531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I0318 13:17:19.748025 1090531 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:19.748461 1090531 main.go:141] libmachine: Using API Version  1
	I0318 13:17:19.748482 1090531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:19.748801 1090531 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:19.748998 1090531 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:17:19.749226 1090531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:19.749257 1090531 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:17:19.752111 1090531 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:19.752477 1090531 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:19.752508 1090531 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:19.752655 1090531 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:17:19.752830 1090531 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:17:19.752961 1090531 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:17:19.753101 1090531 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:17:19.835579 1090531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:19.850027 1090531 status.go:257] ha-942957-m04 status: &{Name:ha-942957-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr: exit status 3 (4.16418309s)

-- stdout --
	ha-942957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-942957-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0318 13:17:22.138910 1090627 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:17:22.139067 1090627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:22.139079 1090627 out.go:304] Setting ErrFile to fd 2...
	I0318 13:17:22.139086 1090627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:22.139290 1090627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:17:22.139489 1090627 out.go:298] Setting JSON to false
	I0318 13:17:22.139537 1090627 mustload.go:65] Loading cluster: ha-942957
	I0318 13:17:22.139592 1090627 notify.go:220] Checking for updates...
	I0318 13:17:22.139973 1090627 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:17:22.139992 1090627 status.go:255] checking status of ha-942957 ...
	I0318 13:17:22.140478 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:22.140546 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:22.161350 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0318 13:17:22.161885 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:22.162448 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:22.162473 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:22.162960 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:22.163169 1090627 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:17:22.164940 1090627 status.go:330] ha-942957 host status = "Running" (err=<nil>)
	I0318 13:17:22.164961 1090627 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:22.165259 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:22.165312 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:22.181837 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0318 13:17:22.182289 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:22.182826 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:22.182851 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:22.183216 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:22.183527 1090627 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:17:22.186474 1090627 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:22.186901 1090627 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:22.186949 1090627 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:22.187139 1090627 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:22.187509 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:22.187546 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:22.202728 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I0318 13:17:22.203158 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:22.203715 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:22.203738 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:22.204114 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:22.204333 1090627 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:17:22.204560 1090627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:22.204602 1090627 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:17:22.207325 1090627 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:22.207790 1090627 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:22.207808 1090627 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:22.207987 1090627 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:17:22.208192 1090627 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:17:22.208339 1090627 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:17:22.208479 1090627 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:17:22.293040 1090627 ssh_runner.go:195] Run: systemctl --version
	I0318 13:17:22.299632 1090627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:22.317566 1090627 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:22.317606 1090627 api_server.go:166] Checking apiserver status ...
	I0318 13:17:22.317653 1090627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:22.333558 1090627 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0318 13:17:22.344439 1090627 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:22.344496 1090627 ssh_runner.go:195] Run: ls
	I0318 13:17:22.348874 1090627 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:22.353583 1090627 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:22.353605 1090627 status.go:422] ha-942957 apiserver status = Running (err=<nil>)
	I0318 13:17:22.353615 1090627 status.go:257] ha-942957 status: &{Name:ha-942957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:22.353631 1090627 status.go:255] checking status of ha-942957-m02 ...
	I0318 13:17:22.353965 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:22.354003 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:22.370301 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0318 13:17:22.370745 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:22.371282 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:22.371304 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:22.371638 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:22.371839 1090627 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:17:22.373301 1090627 status.go:330] ha-942957-m02 host status = "Running" (err=<nil>)
	I0318 13:17:22.373320 1090627 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:17:22.373751 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:22.373793 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:22.389436 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41969
	I0318 13:17:22.389879 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:22.390358 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:22.390381 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:22.390689 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:22.390871 1090627 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:17:22.393557 1090627 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:22.394007 1090627 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:17:22.394032 1090627 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:22.394188 1090627 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:17:22.394491 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:22.394533 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:22.410267 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0318 13:17:22.410676 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:22.411172 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:22.411205 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:22.411523 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:22.411734 1090627 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:17:22.411998 1090627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:22.412027 1090627 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:17:22.414857 1090627 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:22.415380 1090627 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:17:22.415402 1090627 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:22.415625 1090627 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:17:22.415804 1090627 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:17:22.415982 1090627 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:17:22.416140 1090627 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	W0318 13:17:22.564098 1090627 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:22.564193 1090627 retry.go:31] will retry after 227.758845ms: dial tcp 192.168.39.22:22: connect: no route to host
	W0318 13:17:25.860128 1090627 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0318 13:17:25.860246 1090627 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0318 13:17:25.860264 1090627 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:25.860272 1090627 status.go:257] ha-942957-m02 status: &{Name:ha-942957-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 13:17:25.860294 1090627 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:25.860301 1090627 status.go:255] checking status of ha-942957-m03 ...
	I0318 13:17:25.860625 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:25.860688 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:25.876200 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0318 13:17:25.876702 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:25.877256 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:25.877283 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:25.877636 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:25.877828 1090627 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:17:25.879496 1090627 status.go:330] ha-942957-m03 host status = "Running" (err=<nil>)
	I0318 13:17:25.879514 1090627 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:25.879821 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:25.879876 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:25.895197 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43115
	I0318 13:17:25.895712 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:25.896296 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:25.896328 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:25.896735 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:25.896961 1090627 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:17:25.899757 1090627 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:25.900164 1090627 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:25.900214 1090627 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:25.900330 1090627 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:25.900642 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:25.900687 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:25.916334 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42545
	I0318 13:17:25.916968 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:25.917626 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:25.917653 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:25.918013 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:25.918209 1090627 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:17:25.918427 1090627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:25.918455 1090627 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:17:25.921615 1090627 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:25.922149 1090627 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:25.922184 1090627 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:25.922315 1090627 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:17:25.922522 1090627 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:17:25.922721 1090627 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:17:25.922884 1090627 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:17:26.013438 1090627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:26.031120 1090627 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:26.031192 1090627 api_server.go:166] Checking apiserver status ...
	I0318 13:17:26.031251 1090627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:26.047137 1090627 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0318 13:17:26.059023 1090627 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:26.059104 1090627 ssh_runner.go:195] Run: ls
	I0318 13:17:26.065255 1090627 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:26.070497 1090627 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:26.070528 1090627 status.go:422] ha-942957-m03 apiserver status = Running (err=<nil>)
	I0318 13:17:26.070545 1090627 status.go:257] ha-942957-m03 status: &{Name:ha-942957-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:26.070566 1090627 status.go:255] checking status of ha-942957-m04 ...
	I0318 13:17:26.070920 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:26.070964 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:26.086424 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33995
	I0318 13:17:26.086955 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:26.087446 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:26.087471 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:26.087858 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:26.088047 1090627 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:17:26.089818 1090627 status.go:330] ha-942957-m04 host status = "Running" (err=<nil>)
	I0318 13:17:26.089840 1090627 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:26.090221 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:26.090271 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:26.106429 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0318 13:17:26.106850 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:26.107434 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:26.107464 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:26.107926 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:26.108173 1090627 main.go:141] libmachine: (ha-942957-m04) Calling .GetIP
	I0318 13:17:26.111105 1090627 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:26.111548 1090627 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:26.111582 1090627 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:26.111726 1090627 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:26.112139 1090627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:26.112190 1090627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:26.133005 1090627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41345
	I0318 13:17:26.133726 1090627 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:26.134287 1090627 main.go:141] libmachine: Using API Version  1
	I0318 13:17:26.134316 1090627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:26.134762 1090627 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:26.135151 1090627 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:17:26.135377 1090627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:26.135403 1090627 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:17:26.138775 1090627 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:26.139222 1090627 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:26.139252 1090627 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:26.139530 1090627 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:17:26.139769 1090627 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:17:26.139958 1090627 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:17:26.140175 1090627 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:17:26.224459 1090627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:26.240137 1090627 status.go:257] ha-942957-m04 status: &{Name:ha-942957-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
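The stderr above shows status.go deriving each node's storage state from `sh -c "df -h /var | awk 'NR==2{print $5}'"` over SSH, and falling back to a status error when the dial to 192.168.39.22:22 fails. As a minimal sketch (not minikube's actual implementation; the helper name and example value are hypothetical), the percentage string that command returns could be parsed like this:

// Minimal sketch: turn df/awk output such as "18%\n" into an integer percentage.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseDiskUsage converts output like "18%\n" (what `df -h /var | awk 'NR==2{print $5}'`
// prints) into the integer 18. Hypothetical helper for illustration only.
func parseDiskUsage(out string) (int, error) {
	s := strings.TrimSuffix(strings.TrimSpace(out), "%")
	if s == "" {
		return 0, fmt.Errorf("empty df output")
	}
	return strconv.Atoi(s)
}

func main() {
	pct, err := parseDiskUsage("18%\n")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("/var is %d%% full\n", pct)
}

When the SSH session cannot be established at all, as for ha-942957-m02 above, there is no df output to parse, which is why the node is reported with Host:Error rather than a usage figure.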
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr: exit status 3 (3.757692978s)

                                                
                                                
-- stdout --
	ha-942957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-942957-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:17:30.910178 1090733 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:17:30.910352 1090733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:30.910363 1090733 out.go:304] Setting ErrFile to fd 2...
	I0318 13:17:30.910370 1090733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:30.910597 1090733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:17:30.910836 1090733 out.go:298] Setting JSON to false
	I0318 13:17:30.910894 1090733 mustload.go:65] Loading cluster: ha-942957
	I0318 13:17:30.910999 1090733 notify.go:220] Checking for updates...
	I0318 13:17:30.911298 1090733 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:17:30.911318 1090733 status.go:255] checking status of ha-942957 ...
	I0318 13:17:30.911811 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:30.911889 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:30.928084 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41621
	I0318 13:17:30.928627 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:30.929325 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:30.929355 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:30.929842 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:30.930122 1090733 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:17:30.932112 1090733 status.go:330] ha-942957 host status = "Running" (err=<nil>)
	I0318 13:17:30.932136 1090733 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:30.932522 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:30.932577 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:30.947969 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0318 13:17:30.948503 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:30.949015 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:30.949050 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:30.949400 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:30.949612 1090733 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:17:30.952491 1090733 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:30.952972 1090733 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:30.953006 1090733 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:30.953169 1090733 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:30.953471 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:30.953511 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:30.969676 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0318 13:17:30.970049 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:30.970503 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:30.970526 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:30.970865 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:30.971240 1090733 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:17:30.971476 1090733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:30.971517 1090733 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:17:30.974873 1090733 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:30.975369 1090733 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:30.975397 1090733 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:30.975528 1090733 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:17:30.975713 1090733 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:17:30.975903 1090733 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:17:30.976067 1090733 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:17:31.056936 1090733 ssh_runner.go:195] Run: systemctl --version
	I0318 13:17:31.064157 1090733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:31.079921 1090733 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:31.079953 1090733 api_server.go:166] Checking apiserver status ...
	I0318 13:17:31.079990 1090733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:31.096691 1090733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0318 13:17:31.112778 1090733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:31.112843 1090733 ssh_runner.go:195] Run: ls
	I0318 13:17:31.117929 1090733 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:31.125243 1090733 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:31.125280 1090733 status.go:422] ha-942957 apiserver status = Running (err=<nil>)
	I0318 13:17:31.125290 1090733 status.go:257] ha-942957 status: &{Name:ha-942957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:31.125316 1090733 status.go:255] checking status of ha-942957-m02 ...
	I0318 13:17:31.125795 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:31.125846 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:31.142977 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42755
	I0318 13:17:31.143458 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:31.143995 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:31.144024 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:31.144376 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:31.144659 1090733 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:17:31.146590 1090733 status.go:330] ha-942957-m02 host status = "Running" (err=<nil>)
	I0318 13:17:31.146613 1090733 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:17:31.147032 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:31.147085 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:31.164674 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I0318 13:17:31.165108 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:31.165695 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:31.165728 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:31.166093 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:31.166326 1090733 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:17:31.169872 1090733 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:31.170340 1090733 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:17:31.170365 1090733 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:31.170509 1090733 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:17:31.170867 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:31.170913 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:31.187066 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36463
	I0318 13:17:31.187498 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:31.188040 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:31.188066 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:31.188415 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:31.188642 1090733 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:17:31.188841 1090733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:31.188865 1090733 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:17:31.192140 1090733 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:31.192560 1090733 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:17:31.192583 1090733 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:17:31.192748 1090733 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:17:31.192914 1090733 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:17:31.193103 1090733 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:17:31.193290 1090733 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	W0318 13:17:34.248117 1090733 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.22:22: connect: no route to host
	W0318 13:17:34.248232 1090733 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E0318 13:17:34.248252 1090733 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:34.248259 1090733 status.go:257] ha-942957-m02 status: &{Name:ha-942957-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 13:17:34.248276 1090733 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	I0318 13:17:34.248290 1090733 status.go:255] checking status of ha-942957-m03 ...
	I0318 13:17:34.248599 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:34.248642 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:34.264018 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0318 13:17:34.264471 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:34.264995 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:34.265018 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:34.265432 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:34.265670 1090733 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:17:34.267259 1090733 status.go:330] ha-942957-m03 host status = "Running" (err=<nil>)
	I0318 13:17:34.267281 1090733 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:34.267568 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:34.267621 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:34.283745 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0318 13:17:34.284384 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:34.284982 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:34.285011 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:34.285435 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:34.285683 1090733 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:17:34.289127 1090733 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:34.289453 1090733 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:34.289474 1090733 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:34.289573 1090733 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:34.289975 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:34.290023 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:34.304681 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36225
	I0318 13:17:34.305103 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:34.305533 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:34.305558 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:34.305879 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:34.306120 1090733 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:17:34.306331 1090733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:34.306354 1090733 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:17:34.308875 1090733 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:34.309342 1090733 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:34.309383 1090733 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:34.309516 1090733 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:17:34.309694 1090733 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:17:34.309853 1090733 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:17:34.309969 1090733 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:17:34.395331 1090733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:34.412800 1090733 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:34.412833 1090733 api_server.go:166] Checking apiserver status ...
	I0318 13:17:34.412873 1090733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:34.428139 1090733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0318 13:17:34.438569 1090733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:34.438625 1090733 ssh_runner.go:195] Run: ls
	I0318 13:17:34.443347 1090733 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:34.450702 1090733 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:34.450732 1090733 status.go:422] ha-942957-m03 apiserver status = Running (err=<nil>)
	I0318 13:17:34.450745 1090733 status.go:257] ha-942957-m03 status: &{Name:ha-942957-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:34.450771 1090733 status.go:255] checking status of ha-942957-m04 ...
	I0318 13:17:34.451071 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:34.451129 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:34.466385 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42265
	I0318 13:17:34.466814 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:34.467349 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:34.467384 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:34.467739 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:34.468085 1090733 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:17:34.469691 1090733 status.go:330] ha-942957-m04 host status = "Running" (err=<nil>)
	I0318 13:17:34.469713 1090733 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:34.470019 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:34.470053 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:34.484998 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0318 13:17:34.485400 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:34.486132 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:34.486151 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:34.486476 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:34.486703 1090733 main.go:141] libmachine: (ha-942957-m04) Calling .GetIP
	I0318 13:17:34.489760 1090733 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:34.490137 1090733 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:34.490175 1090733 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:34.490351 1090733 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:34.490760 1090733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:34.490813 1090733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:34.505507 1090733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I0318 13:17:34.505951 1090733 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:34.506464 1090733 main.go:141] libmachine: Using API Version  1
	I0318 13:17:34.506489 1090733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:34.506832 1090733 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:34.507029 1090733 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:17:34.507228 1090733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:34.507253 1090733 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:17:34.510203 1090733 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:34.510731 1090733 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:34.510757 1090733 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:34.510887 1090733 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:17:34.511057 1090733 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:17:34.511221 1090733 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:17:34.511351 1090733 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:17:34.592443 1090733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:34.607954 1090733 status.go:257] ha-942957-m04 status: &{Name:ha-942957-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0318 13:17:37.320099 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
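For the reachable control-plane nodes, the log above probes the load-balanced apiserver endpoint at https://192.168.39.254:8443/healthz and treats an HTTP 200 with body "ok" as healthy. A minimal sketch of that kind of probe is below (assumptions: the endpoint is reachable from the test host and is served with a cluster-internal CA, hence InsecureSkipVerify; this is an illustration, not minikube's api_server.go code):

// Minimal sketch of an apiserver healthz probe like the one reported above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver certificate is signed by the cluster CA; skipping
			// verification keeps this self-contained example runnable anywhere.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}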
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr: exit status 7 (698.00186ms)

                                                
                                                
-- stdout --
	ha-942957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-942957-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:17:41.251286 1090861 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:17:41.251623 1090861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:41.251635 1090861 out.go:304] Setting ErrFile to fd 2...
	I0318 13:17:41.251640 1090861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:41.251874 1090861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:17:41.252086 1090861 out.go:298] Setting JSON to false
	I0318 13:17:41.252129 1090861 mustload.go:65] Loading cluster: ha-942957
	I0318 13:17:41.252247 1090861 notify.go:220] Checking for updates...
	I0318 13:17:41.252520 1090861 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:17:41.252537 1090861 status.go:255] checking status of ha-942957 ...
	I0318 13:17:41.252940 1090861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:41.253002 1090861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:41.271682 1090861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39029
	I0318 13:17:41.272179 1090861 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:41.272831 1090861 main.go:141] libmachine: Using API Version  1
	I0318 13:17:41.272872 1090861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:41.273318 1090861 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:41.273610 1090861 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:17:41.275863 1090861 status.go:330] ha-942957 host status = "Running" (err=<nil>)
	I0318 13:17:41.275887 1090861 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:41.276241 1090861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:41.276278 1090861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:41.291734 1090861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I0318 13:17:41.292229 1090861 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:41.292738 1090861 main.go:141] libmachine: Using API Version  1
	I0318 13:17:41.292770 1090861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:41.293101 1090861 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:41.293282 1090861 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:17:41.296648 1090861 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:41.297176 1090861 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:41.297210 1090861 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:41.297352 1090861 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:41.297705 1090861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:41.297750 1090861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:41.312651 1090861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42701
	I0318 13:17:41.313066 1090861 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:41.313530 1090861 main.go:141] libmachine: Using API Version  1
	I0318 13:17:41.313552 1090861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:41.313897 1090861 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:41.314124 1090861 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:17:41.314330 1090861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:41.314354 1090861 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:17:41.316989 1090861 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:41.317496 1090861 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:41.317529 1090861 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:41.317654 1090861 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:17:41.317829 1090861 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:17:41.317976 1090861 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:17:41.318127 1090861 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:17:41.405449 1090861 ssh_runner.go:195] Run: systemctl --version
	I0318 13:17:41.412280 1090861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:41.433759 1090861 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:41.433797 1090861 api_server.go:166] Checking apiserver status ...
	I0318 13:17:41.433858 1090861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:41.451563 1090861 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0318 13:17:41.462470 1090861 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:41.462529 1090861 ssh_runner.go:195] Run: ls
	I0318 13:17:41.467282 1090861 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:41.474265 1090861 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:41.474297 1090861 status.go:422] ha-942957 apiserver status = Running (err=<nil>)
	I0318 13:17:41.474309 1090861 status.go:257] ha-942957 status: &{Name:ha-942957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:41.474343 1090861 status.go:255] checking status of ha-942957-m02 ...
	I0318 13:17:41.474681 1090861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:41.474724 1090861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:41.490185 1090861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42335
	I0318 13:17:41.490634 1090861 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:41.491138 1090861 main.go:141] libmachine: Using API Version  1
	I0318 13:17:41.491166 1090861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:41.491508 1090861 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:41.491741 1090861 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:17:41.493317 1090861 status.go:330] ha-942957-m02 host status = "Stopped" (err=<nil>)
	I0318 13:17:41.493337 1090861 status.go:343] host is not running, skipping remaining checks
	I0318 13:17:41.493345 1090861 status.go:257] ha-942957-m02 status: &{Name:ha-942957-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:41.493368 1090861 status.go:255] checking status of ha-942957-m03 ...
	I0318 13:17:41.493677 1090861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:41.493761 1090861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:41.509495 1090861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I0318 13:17:41.509946 1090861 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:41.510510 1090861 main.go:141] libmachine: Using API Version  1
	I0318 13:17:41.510534 1090861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:41.510929 1090861 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:41.511181 1090861 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:17:41.512881 1090861 status.go:330] ha-942957-m03 host status = "Running" (err=<nil>)
	I0318 13:17:41.512905 1090861 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:41.513269 1090861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:41.513321 1090861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:41.529363 1090861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0318 13:17:41.529827 1090861 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:41.530342 1090861 main.go:141] libmachine: Using API Version  1
	I0318 13:17:41.530367 1090861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:41.530770 1090861 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:41.530995 1090861 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:17:41.533828 1090861 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:41.534234 1090861 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:41.534265 1090861 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:41.534417 1090861 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:41.534754 1090861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:41.534811 1090861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:41.550189 1090861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0318 13:17:41.550607 1090861 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:41.551138 1090861 main.go:141] libmachine: Using API Version  1
	I0318 13:17:41.551160 1090861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:41.551454 1090861 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:41.551646 1090861 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:17:41.551883 1090861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:41.551911 1090861 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:17:41.554707 1090861 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:41.555199 1090861 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:41.555230 1090861 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:41.555381 1090861 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:17:41.555584 1090861 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:17:41.555742 1090861 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:17:41.555903 1090861 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:17:41.636917 1090861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:41.660015 1090861 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:41.660048 1090861 api_server.go:166] Checking apiserver status ...
	I0318 13:17:41.660094 1090861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:41.680084 1090861 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0318 13:17:41.692325 1090861 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:41.692377 1090861 ssh_runner.go:195] Run: ls
	I0318 13:17:41.697889 1090861 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:41.703020 1090861 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:41.703062 1090861 status.go:422] ha-942957-m03 apiserver status = Running (err=<nil>)
	I0318 13:17:41.703076 1090861 status.go:257] ha-942957-m03 status: &{Name:ha-942957-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:41.703098 1090861 status.go:255] checking status of ha-942957-m04 ...
	I0318 13:17:41.703533 1090861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:41.703594 1090861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:41.721510 1090861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0318 13:17:41.722131 1090861 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:41.723018 1090861 main.go:141] libmachine: Using API Version  1
	I0318 13:17:41.723053 1090861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:41.723532 1090861 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:41.723762 1090861 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:17:41.726231 1090861 status.go:330] ha-942957-m04 host status = "Running" (err=<nil>)
	I0318 13:17:41.726254 1090861 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:41.726684 1090861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:41.726762 1090861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:41.743683 1090861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0318 13:17:41.744325 1090861 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:41.744938 1090861 main.go:141] libmachine: Using API Version  1
	I0318 13:17:41.744969 1090861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:41.745352 1090861 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:41.745576 1090861 main.go:141] libmachine: (ha-942957-m04) Calling .GetIP
	I0318 13:17:41.748985 1090861 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:41.749459 1090861 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:41.749485 1090861 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:41.749939 1090861 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:41.750420 1090861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:41.750470 1090861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:41.768268 1090861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46879
	I0318 13:17:41.768835 1090861 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:41.769507 1090861 main.go:141] libmachine: Using API Version  1
	I0318 13:17:41.769535 1090861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:41.769917 1090861 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:41.770113 1090861 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:17:41.770257 1090861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:41.770282 1090861 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:17:41.774528 1090861 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:41.775970 1090861 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:17:41.775972 1090861 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:41.776058 1090861 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:41.776300 1090861 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:17:41.776567 1090861 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:17:41.776791 1090861 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:17:41.867146 1090861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:41.886554 1090861 status.go:257] ha-942957-m04 status: &{Name:ha-942957-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
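By this run ha-942957-m02 has moved from Host:Error (no route to host during the earlier SSH attempts) to Host:Stopped, and the stderr records a status value of the form &{Name:... Host:... Kubelet:... APIServer:... Kubeconfig:...}. The sketch below mirrors that record with an illustrative struct and renders it in the same shape as the "-- stdout --" blocks above; the type and function here are hypothetical, not minikube's actual status type:

// Minimal sketch of a per-node status record and its stdout-style rendering.
package main

import "fmt"

type NodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func render(s NodeStatus) {
	fmt.Println(s.Name)
	if s.Worker {
		fmt.Println("type: Worker")
	} else {
		fmt.Println("type: Control Plane")
	}
	fmt.Println("host:", s.Host)
	fmt.Println("kubelet:", s.Kubelet)
	if !s.Worker {
		fmt.Println("apiserver:", s.APIServer)
		fmt.Println("kubeconfig:", s.Kubeconfig)
	}
	fmt.Println()
}

func main() {
	render(NodeStatus{Name: "ha-942957-m02", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"})
	render(NodeStatus{Name: "ha-942957-m04", Host: "Running", Kubelet: "Running", APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true})
}

The distinct non-zero exit codes seen across these runs (exit status 3 while m02 is in the Error state, exit status 7 once it reports Stopped) are what ha_test.go:428 keys off when it re-runs the status command.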
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr: exit status 7 (665.287723ms)

                                                
                                                
-- stdout --
	ha-942957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-942957-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:17:49.738630 1091430 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:17:49.738798 1091430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:49.738814 1091430 out.go:304] Setting ErrFile to fd 2...
	I0318 13:17:49.738821 1091430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:17:49.739493 1091430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:17:49.740885 1091430 out.go:298] Setting JSON to false
	I0318 13:17:49.740938 1091430 mustload.go:65] Loading cluster: ha-942957
	I0318 13:17:49.741121 1091430 notify.go:220] Checking for updates...
	I0318 13:17:49.741449 1091430 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:17:49.741473 1091430 status.go:255] checking status of ha-942957 ...
	I0318 13:17:49.741891 1091430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:49.741967 1091430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:49.761073 1091430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35789
	I0318 13:17:49.761593 1091430 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:49.762349 1091430 main.go:141] libmachine: Using API Version  1
	I0318 13:17:49.762382 1091430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:49.762775 1091430 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:49.763045 1091430 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:17:49.764970 1091430 status.go:330] ha-942957 host status = "Running" (err=<nil>)
	I0318 13:17:49.764992 1091430 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:49.765395 1091430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:49.765449 1091430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:49.781177 1091430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I0318 13:17:49.781624 1091430 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:49.782074 1091430 main.go:141] libmachine: Using API Version  1
	I0318 13:17:49.782095 1091430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:49.782471 1091430 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:49.782676 1091430 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:17:49.785863 1091430 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:49.786294 1091430 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:49.786334 1091430 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:49.786517 1091430 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:17:49.786939 1091430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:49.786986 1091430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:49.802481 1091430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0318 13:17:49.802997 1091430 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:49.803576 1091430 main.go:141] libmachine: Using API Version  1
	I0318 13:17:49.803603 1091430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:49.804039 1091430 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:49.804262 1091430 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:17:49.804480 1091430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:49.804511 1091430 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:17:49.807645 1091430 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:49.808160 1091430 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:17:49.808193 1091430 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:17:49.808329 1091430 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:17:49.808521 1091430 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:17:49.808656 1091430 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:17:49.808781 1091430 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:17:49.892050 1091430 ssh_runner.go:195] Run: systemctl --version
	I0318 13:17:49.902023 1091430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:49.919468 1091430 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:49.919503 1091430 api_server.go:166] Checking apiserver status ...
	I0318 13:17:49.919541 1091430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:49.939693 1091430 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0318 13:17:49.954259 1091430 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:49.954325 1091430 ssh_runner.go:195] Run: ls
	I0318 13:17:49.959581 1091430 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:49.964548 1091430 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:49.964584 1091430 status.go:422] ha-942957 apiserver status = Running (err=<nil>)
	I0318 13:17:49.964601 1091430 status.go:257] ha-942957 status: &{Name:ha-942957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:49.964629 1091430 status.go:255] checking status of ha-942957-m02 ...
	I0318 13:17:49.964953 1091430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:49.965034 1091430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:49.981536 1091430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45531
	I0318 13:17:49.981986 1091430 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:49.982483 1091430 main.go:141] libmachine: Using API Version  1
	I0318 13:17:49.982504 1091430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:49.982840 1091430 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:49.983031 1091430 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:17:49.984609 1091430 status.go:330] ha-942957-m02 host status = "Stopped" (err=<nil>)
	I0318 13:17:49.984639 1091430 status.go:343] host is not running, skipping remaining checks
	I0318 13:17:49.984654 1091430 status.go:257] ha-942957-m02 status: &{Name:ha-942957-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:49.984677 1091430 status.go:255] checking status of ha-942957-m03 ...
	I0318 13:17:49.984965 1091430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:49.985003 1091430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:50.001878 1091430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35579
	I0318 13:17:50.002388 1091430 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:50.002957 1091430 main.go:141] libmachine: Using API Version  1
	I0318 13:17:50.002989 1091430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:50.003336 1091430 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:50.003536 1091430 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:17:50.005142 1091430 status.go:330] ha-942957-m03 host status = "Running" (err=<nil>)
	I0318 13:17:50.005163 1091430 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:50.005482 1091430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:50.005535 1091430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:50.020933 1091430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I0318 13:17:50.021413 1091430 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:50.021875 1091430 main.go:141] libmachine: Using API Version  1
	I0318 13:17:50.021901 1091430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:50.022321 1091430 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:50.022521 1091430 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:17:50.025424 1091430 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:50.025834 1091430 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:50.025870 1091430 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:50.026077 1091430 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:17:50.026387 1091430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:50.026439 1091430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:50.042066 1091430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I0318 13:17:50.042491 1091430 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:50.042954 1091430 main.go:141] libmachine: Using API Version  1
	I0318 13:17:50.042980 1091430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:50.043313 1091430 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:50.043575 1091430 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:17:50.043771 1091430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:50.043796 1091430 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:17:50.047005 1091430 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:50.047451 1091430 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:17:50.047478 1091430 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:17:50.047645 1091430 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:17:50.047820 1091430 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:17:50.047967 1091430 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:17:50.048099 1091430 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:17:50.129705 1091430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:50.147768 1091430 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:17:50.147806 1091430 api_server.go:166] Checking apiserver status ...
	I0318 13:17:50.147881 1091430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:17:50.163237 1091430 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0318 13:17:50.174964 1091430 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:17:50.175041 1091430 ssh_runner.go:195] Run: ls
	I0318 13:17:50.179775 1091430 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:17:50.185142 1091430 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:17:50.185180 1091430 status.go:422] ha-942957-m03 apiserver status = Running (err=<nil>)
	I0318 13:17:50.185191 1091430 status.go:257] ha-942957-m03 status: &{Name:ha-942957-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:17:50.185210 1091430 status.go:255] checking status of ha-942957-m04 ...
	I0318 13:17:50.185626 1091430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:50.185676 1091430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:50.201370 1091430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I0318 13:17:50.201881 1091430 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:50.202515 1091430 main.go:141] libmachine: Using API Version  1
	I0318 13:17:50.202538 1091430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:50.202897 1091430 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:50.203150 1091430 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:17:50.204894 1091430 status.go:330] ha-942957-m04 host status = "Running" (err=<nil>)
	I0318 13:17:50.204930 1091430 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:50.205246 1091430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:50.205301 1091430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:50.220736 1091430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0318 13:17:50.221175 1091430 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:50.221679 1091430 main.go:141] libmachine: Using API Version  1
	I0318 13:17:50.221700 1091430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:50.222080 1091430 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:50.222300 1091430 main.go:141] libmachine: (ha-942957-m04) Calling .GetIP
	I0318 13:17:50.225273 1091430 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:50.225621 1091430 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:50.225644 1091430 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:50.225842 1091430 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:17:50.226175 1091430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:17:50.226219 1091430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:17:50.241606 1091430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0318 13:17:50.242063 1091430 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:17:50.242543 1091430 main.go:141] libmachine: Using API Version  1
	I0318 13:17:50.242566 1091430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:17:50.242905 1091430 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:17:50.243099 1091430 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:17:50.243286 1091430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:17:50.243307 1091430 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:17:50.246304 1091430 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:50.246770 1091430 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:17:50.246804 1091430 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:17:50.246931 1091430 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:17:50.247116 1091430 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:17:50.247276 1091430 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:17:50.247402 1091430 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:17:50.328111 1091430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:17:50.343031 1091430 status.go:257] ha-942957-m04 status: &{Name:ha-942957-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr: exit status 7 (669.699474ms)

                                                
                                                
-- stdout --
	ha-942957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-942957-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:18:02.031221 1091528 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:18:02.031352 1091528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:18:02.031361 1091528 out.go:304] Setting ErrFile to fd 2...
	I0318 13:18:02.031365 1091528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:18:02.031564 1091528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:18:02.031729 1091528 out.go:298] Setting JSON to false
	I0318 13:18:02.031771 1091528 mustload.go:65] Loading cluster: ha-942957
	I0318 13:18:02.031914 1091528 notify.go:220] Checking for updates...
	I0318 13:18:02.032250 1091528 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:18:02.032272 1091528 status.go:255] checking status of ha-942957 ...
	I0318 13:18:02.032738 1091528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:02.032795 1091528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:02.054890 1091528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0318 13:18:02.055377 1091528 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:02.055991 1091528 main.go:141] libmachine: Using API Version  1
	I0318 13:18:02.056013 1091528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:02.056438 1091528 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:02.056757 1091528 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:18:02.058712 1091528 status.go:330] ha-942957 host status = "Running" (err=<nil>)
	I0318 13:18:02.058742 1091528 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:18:02.059097 1091528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:02.059176 1091528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:02.076071 1091528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37019
	I0318 13:18:02.076582 1091528 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:02.077202 1091528 main.go:141] libmachine: Using API Version  1
	I0318 13:18:02.077234 1091528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:02.077695 1091528 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:02.077891 1091528 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:18:02.080929 1091528 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:18:02.081296 1091528 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:18:02.081334 1091528 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:18:02.081508 1091528 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:18:02.081820 1091528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:02.081866 1091528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:02.096944 1091528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
	I0318 13:18:02.097430 1091528 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:02.097895 1091528 main.go:141] libmachine: Using API Version  1
	I0318 13:18:02.097920 1091528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:02.098283 1091528 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:02.098511 1091528 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:18:02.098731 1091528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:18:02.098768 1091528 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:18:02.101637 1091528 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:18:02.102186 1091528 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:18:02.102218 1091528 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:18:02.102334 1091528 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:18:02.102532 1091528 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:18:02.102709 1091528 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:18:02.102872 1091528 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:18:02.188417 1091528 ssh_runner.go:195] Run: systemctl --version
	I0318 13:18:02.195186 1091528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:18:02.215430 1091528 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:18:02.215468 1091528 api_server.go:166] Checking apiserver status ...
	I0318 13:18:02.215514 1091528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:18:02.233514 1091528 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0318 13:18:02.245901 1091528 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:18:02.245967 1091528 ssh_runner.go:195] Run: ls
	I0318 13:18:02.250984 1091528 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:18:02.257484 1091528 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:18:02.257517 1091528 status.go:422] ha-942957 apiserver status = Running (err=<nil>)
	I0318 13:18:02.257531 1091528 status.go:257] ha-942957 status: &{Name:ha-942957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:18:02.257555 1091528 status.go:255] checking status of ha-942957-m02 ...
	I0318 13:18:02.257856 1091528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:02.257904 1091528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:02.273230 1091528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44659
	I0318 13:18:02.273728 1091528 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:02.274258 1091528 main.go:141] libmachine: Using API Version  1
	I0318 13:18:02.274281 1091528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:02.274612 1091528 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:02.274787 1091528 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:18:02.276449 1091528 status.go:330] ha-942957-m02 host status = "Stopped" (err=<nil>)
	I0318 13:18:02.276468 1091528 status.go:343] host is not running, skipping remaining checks
	I0318 13:18:02.276476 1091528 status.go:257] ha-942957-m02 status: &{Name:ha-942957-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:18:02.276500 1091528 status.go:255] checking status of ha-942957-m03 ...
	I0318 13:18:02.276845 1091528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:02.276898 1091528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:02.291975 1091528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42653
	I0318 13:18:02.292440 1091528 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:02.292972 1091528 main.go:141] libmachine: Using API Version  1
	I0318 13:18:02.293001 1091528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:02.293348 1091528 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:02.293588 1091528 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:18:02.295290 1091528 status.go:330] ha-942957-m03 host status = "Running" (err=<nil>)
	I0318 13:18:02.295309 1091528 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:18:02.295748 1091528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:02.295814 1091528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:02.310699 1091528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45893
	I0318 13:18:02.311152 1091528 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:02.311614 1091528 main.go:141] libmachine: Using API Version  1
	I0318 13:18:02.311637 1091528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:02.311997 1091528 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:02.312223 1091528 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:18:02.314960 1091528 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:18:02.315457 1091528 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:18:02.315485 1091528 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:18:02.315598 1091528 host.go:66] Checking if "ha-942957-m03" exists ...
	I0318 13:18:02.315921 1091528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:02.315958 1091528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:02.331816 1091528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45555
	I0318 13:18:02.332257 1091528 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:02.332700 1091528 main.go:141] libmachine: Using API Version  1
	I0318 13:18:02.332720 1091528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:02.333082 1091528 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:02.333300 1091528 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:18:02.333500 1091528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:18:02.333522 1091528 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:18:02.336116 1091528 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:18:02.336569 1091528 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:18:02.336600 1091528 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:18:02.336765 1091528 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:18:02.336939 1091528 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:18:02.337107 1091528 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:18:02.337262 1091528 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:18:02.417531 1091528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:18:02.437633 1091528 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:18:02.437669 1091528 api_server.go:166] Checking apiserver status ...
	I0318 13:18:02.437712 1091528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:18:02.453136 1091528 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0318 13:18:02.464669 1091528 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:18:02.464753 1091528 ssh_runner.go:195] Run: ls
	I0318 13:18:02.470052 1091528 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:18:02.475011 1091528 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:18:02.475039 1091528 status.go:422] ha-942957-m03 apiserver status = Running (err=<nil>)
	I0318 13:18:02.475049 1091528 status.go:257] ha-942957-m03 status: &{Name:ha-942957-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:18:02.475065 1091528 status.go:255] checking status of ha-942957-m04 ...
	I0318 13:18:02.475351 1091528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:02.475384 1091528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:02.490762 1091528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0318 13:18:02.491280 1091528 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:02.491905 1091528 main.go:141] libmachine: Using API Version  1
	I0318 13:18:02.491929 1091528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:02.492301 1091528 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:02.492476 1091528 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:18:02.494189 1091528 status.go:330] ha-942957-m04 host status = "Running" (err=<nil>)
	I0318 13:18:02.494210 1091528 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:18:02.494512 1091528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:02.494546 1091528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:02.511314 1091528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41685
	I0318 13:18:02.511815 1091528 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:02.512368 1091528 main.go:141] libmachine: Using API Version  1
	I0318 13:18:02.512394 1091528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:02.512785 1091528 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:02.513013 1091528 main.go:141] libmachine: (ha-942957-m04) Calling .GetIP
	I0318 13:18:02.516040 1091528 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:18:02.516710 1091528 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:18:02.516747 1091528 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:18:02.516890 1091528 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:18:02.517303 1091528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:02.517352 1091528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:02.533574 1091528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I0318 13:18:02.534006 1091528 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:02.534494 1091528 main.go:141] libmachine: Using API Version  1
	I0318 13:18:02.534528 1091528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:02.534838 1091528 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:02.535051 1091528 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:18:02.535265 1091528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:18:02.535290 1091528 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:18:02.537749 1091528 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:18:02.538179 1091528 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:18:02.538211 1091528 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:18:02.538336 1091528 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:18:02.538515 1091528 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:18:02.538683 1091528 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:18:02.538838 1091528 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:18:02.624919 1091528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:18:02.639498 1091528 status.go:257] ha-942957-m04 status: &{Name:ha-942957-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-942957 -n ha-942957
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-942957 logs -n 25: (1.545348539s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957:/home/docker/cp-test_ha-942957-m03_ha-942957.txt                      |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957 sudo cat                                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m03_ha-942957.txt                                |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m02:/home/docker/cp-test_ha-942957-m03_ha-942957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m02 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m03_ha-942957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04:/home/docker/cp-test_ha-942957-m03_ha-942957-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m04 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m03_ha-942957-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp testdata/cp-test.txt                                               | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile666867504/001/cp-test_ha-942957-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957:/home/docker/cp-test_ha-942957-m04_ha-942957.txt                      |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957 sudo cat                                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957.txt                                |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m02:/home/docker/cp-test_ha-942957-m04_ha-942957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m02 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03:/home/docker/cp-test_ha-942957-m04_ha-942957-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m03 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-942957 node stop m02 -v=7                                                    | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-942957 node start m02 -v=7                                                   | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:09:51
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:09:51.591109 1085975 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:09:51.591242 1085975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:09:51.591251 1085975 out.go:304] Setting ErrFile to fd 2...
	I0318 13:09:51.591257 1085975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:09:51.591455 1085975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:09:51.592167 1085975 out.go:298] Setting JSON to false
	I0318 13:09:51.593152 1085975 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17539,"bootTime":1710749853,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:09:51.593229 1085975 start.go:139] virtualization: kvm guest
	I0318 13:09:51.595884 1085975 out.go:177] * [ha-942957] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:09:51.597522 1085975 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 13:09:51.597591 1085975 notify.go:220] Checking for updates...
	I0318 13:09:51.599127 1085975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:09:51.600612 1085975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:09:51.602077 1085975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:09:51.603434 1085975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:09:51.604767 1085975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:09:51.606201 1085975 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:09:51.642699 1085975 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 13:09:51.643964 1085975 start.go:297] selected driver: kvm2
	I0318 13:09:51.643991 1085975 start.go:901] validating driver "kvm2" against <nil>
	I0318 13:09:51.644007 1085975 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:09:51.645057 1085975 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:09:51.645143 1085975 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:09:51.660502 1085975 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:09:51.660552 1085975 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:09:51.660762 1085975 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:09:51.660831 1085975 cni.go:84] Creating CNI manager for ""
	I0318 13:09:51.660847 1085975 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 13:09:51.660859 1085975 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 13:09:51.660923 1085975 start.go:340] cluster config:
	{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:09:51.661043 1085975 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:09:51.662888 1085975 out.go:177] * Starting "ha-942957" primary control-plane node in "ha-942957" cluster
	I0318 13:09:51.664159 1085975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:09:51.664190 1085975 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:09:51.664197 1085975 cache.go:56] Caching tarball of preloaded images
	I0318 13:09:51.664270 1085975 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:09:51.664280 1085975 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:09:51.664570 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:09:51.664590 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json: {Name:mk01c7241d7a91ba57e1555d3781792f26b1c281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:51.664724 1085975 start.go:360] acquireMachinesLock for ha-942957: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:09:51.664754 1085975 start.go:364] duration metric: took 15.187µs to acquireMachinesLock for "ha-942957"
	I0318 13:09:51.664771 1085975 start.go:93] Provisioning new machine with config: &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:09:51.664863 1085975 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 13:09:51.666661 1085975 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:09:51.666777 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:09:51.666818 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:09:51.681851 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0318 13:09:51.682396 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:09:51.682996 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:09:51.683028 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:09:51.683760 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:09:51.684245 1085975 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:09:51.684576 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:09:51.684958 1085975 start.go:159] libmachine.API.Create for "ha-942957" (driver="kvm2")
	I0318 13:09:51.684987 1085975 client.go:168] LocalClient.Create starting
	I0318 13:09:51.685052 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 13:09:51.685088 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:09:51.685103 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:09:51.685158 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 13:09:51.685176 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:09:51.685187 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:09:51.685202 1085975 main.go:141] libmachine: Running pre-create checks...
	I0318 13:09:51.685211 1085975 main.go:141] libmachine: (ha-942957) Calling .PreCreateCheck
	I0318 13:09:51.685617 1085975 main.go:141] libmachine: (ha-942957) Calling .GetConfigRaw
	I0318 13:09:51.686087 1085975 main.go:141] libmachine: Creating machine...
	I0318 13:09:51.686102 1085975 main.go:141] libmachine: (ha-942957) Calling .Create
	I0318 13:09:51.686253 1085975 main.go:141] libmachine: (ha-942957) Creating KVM machine...
	I0318 13:09:51.687635 1085975 main.go:141] libmachine: (ha-942957) DBG | found existing default KVM network
	I0318 13:09:51.688431 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:51.688268 1085998 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045f0}
	I0318 13:09:51.688450 1085975 main.go:141] libmachine: (ha-942957) DBG | created network xml: 
	I0318 13:09:51.688457 1085975 main.go:141] libmachine: (ha-942957) DBG | <network>
	I0318 13:09:51.688463 1085975 main.go:141] libmachine: (ha-942957) DBG |   <name>mk-ha-942957</name>
	I0318 13:09:51.688477 1085975 main.go:141] libmachine: (ha-942957) DBG |   <dns enable='no'/>
	I0318 13:09:51.688481 1085975 main.go:141] libmachine: (ha-942957) DBG |   
	I0318 13:09:51.688490 1085975 main.go:141] libmachine: (ha-942957) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 13:09:51.688495 1085975 main.go:141] libmachine: (ha-942957) DBG |     <dhcp>
	I0318 13:09:51.688504 1085975 main.go:141] libmachine: (ha-942957) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 13:09:51.688509 1085975 main.go:141] libmachine: (ha-942957) DBG |     </dhcp>
	I0318 13:09:51.688517 1085975 main.go:141] libmachine: (ha-942957) DBG |   </ip>
	I0318 13:09:51.688521 1085975 main.go:141] libmachine: (ha-942957) DBG |   
	I0318 13:09:51.688528 1085975 main.go:141] libmachine: (ha-942957) DBG | </network>
	I0318 13:09:51.688532 1085975 main.go:141] libmachine: (ha-942957) DBG | 
	I0318 13:09:51.693934 1085975 main.go:141] libmachine: (ha-942957) DBG | trying to create private KVM network mk-ha-942957 192.168.39.0/24...
	I0318 13:09:51.763790 1085975 main.go:141] libmachine: (ha-942957) DBG | private KVM network mk-ha-942957 192.168.39.0/24 created
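
The network XML printed above is handed to libvirt, which defines and starts the private mk-ha-942957 network. A rough sketch of that call sequence with the libvirt Go bindings (libvirt.org/go/libvirt) follows; error handling is trimmed and this is an illustration of the API, not the kvm2 driver's exact code.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-ha-942957</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network object from the XML, then bring it up.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if err := net.SetAutostart(true); err != nil {
		log.Fatal(err)
	}
	if err := net.Create(); err != nil { // starts the network (bridge + dnsmasq for the DHCP range)
		log.Fatal(err)
	}
	log.Println("private network mk-ha-942957 is active")
}
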
	I0318 13:09:51.763931 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:51.763753 1085998 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:09:51.763989 1085975 main.go:141] libmachine: (ha-942957) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957 ...
	I0318 13:09:51.764008 1085975 main.go:141] libmachine: (ha-942957) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 13:09:51.764029 1085975 main.go:141] libmachine: (ha-942957) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 13:09:52.024720 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:52.024590 1085998 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa...
	I0318 13:09:52.144568 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:52.144429 1085998 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/ha-942957.rawdisk...
	I0318 13:09:52.144599 1085975 main.go:141] libmachine: (ha-942957) DBG | Writing magic tar header
	I0318 13:09:52.144609 1085975 main.go:141] libmachine: (ha-942957) DBG | Writing SSH key tar header
	I0318 13:09:52.144617 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:52.144545 1085998 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957 ...
	I0318 13:09:52.144735 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957
	I0318 13:09:52.144771 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 13:09:52.144786 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957 (perms=drwx------)
	I0318 13:09:52.144802 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 13:09:52.144809 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 13:09:52.144817 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 13:09:52.144824 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 13:09:52.144834 1085975 main.go:141] libmachine: (ha-942957) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 13:09:52.144841 1085975 main.go:141] libmachine: (ha-942957) Creating domain...
	I0318 13:09:52.144848 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:09:52.144860 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 13:09:52.144871 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 13:09:52.144886 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home/jenkins
	I0318 13:09:52.144894 1085975 main.go:141] libmachine: (ha-942957) DBG | Checking permissions on dir: /home
	I0318 13:09:52.144907 1085975 main.go:141] libmachine: (ha-942957) DBG | Skipping /home - not owner
	I0318 13:09:52.145986 1085975 main.go:141] libmachine: (ha-942957) define libvirt domain using xml: 
	I0318 13:09:52.146006 1085975 main.go:141] libmachine: (ha-942957) <domain type='kvm'>
	I0318 13:09:52.146015 1085975 main.go:141] libmachine: (ha-942957)   <name>ha-942957</name>
	I0318 13:09:52.146023 1085975 main.go:141] libmachine: (ha-942957)   <memory unit='MiB'>2200</memory>
	I0318 13:09:52.146030 1085975 main.go:141] libmachine: (ha-942957)   <vcpu>2</vcpu>
	I0318 13:09:52.146036 1085975 main.go:141] libmachine: (ha-942957)   <features>
	I0318 13:09:52.146049 1085975 main.go:141] libmachine: (ha-942957)     <acpi/>
	I0318 13:09:52.146056 1085975 main.go:141] libmachine: (ha-942957)     <apic/>
	I0318 13:09:52.146067 1085975 main.go:141] libmachine: (ha-942957)     <pae/>
	I0318 13:09:52.146084 1085975 main.go:141] libmachine: (ha-942957)     
	I0318 13:09:52.146096 1085975 main.go:141] libmachine: (ha-942957)   </features>
	I0318 13:09:52.146106 1085975 main.go:141] libmachine: (ha-942957)   <cpu mode='host-passthrough'>
	I0318 13:09:52.146136 1085975 main.go:141] libmachine: (ha-942957)   
	I0318 13:09:52.146158 1085975 main.go:141] libmachine: (ha-942957)   </cpu>
	I0318 13:09:52.146164 1085975 main.go:141] libmachine: (ha-942957)   <os>
	I0318 13:09:52.146169 1085975 main.go:141] libmachine: (ha-942957)     <type>hvm</type>
	I0318 13:09:52.146178 1085975 main.go:141] libmachine: (ha-942957)     <boot dev='cdrom'/>
	I0318 13:09:52.146182 1085975 main.go:141] libmachine: (ha-942957)     <boot dev='hd'/>
	I0318 13:09:52.146187 1085975 main.go:141] libmachine: (ha-942957)     <bootmenu enable='no'/>
	I0318 13:09:52.146197 1085975 main.go:141] libmachine: (ha-942957)   </os>
	I0318 13:09:52.146202 1085975 main.go:141] libmachine: (ha-942957)   <devices>
	I0318 13:09:52.146216 1085975 main.go:141] libmachine: (ha-942957)     <disk type='file' device='cdrom'>
	I0318 13:09:52.146227 1085975 main.go:141] libmachine: (ha-942957)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/boot2docker.iso'/>
	I0318 13:09:52.146235 1085975 main.go:141] libmachine: (ha-942957)       <target dev='hdc' bus='scsi'/>
	I0318 13:09:52.146240 1085975 main.go:141] libmachine: (ha-942957)       <readonly/>
	I0318 13:09:52.146246 1085975 main.go:141] libmachine: (ha-942957)     </disk>
	I0318 13:09:52.146252 1085975 main.go:141] libmachine: (ha-942957)     <disk type='file' device='disk'>
	I0318 13:09:52.146260 1085975 main.go:141] libmachine: (ha-942957)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 13:09:52.146269 1085975 main.go:141] libmachine: (ha-942957)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/ha-942957.rawdisk'/>
	I0318 13:09:52.146277 1085975 main.go:141] libmachine: (ha-942957)       <target dev='hda' bus='virtio'/>
	I0318 13:09:52.146282 1085975 main.go:141] libmachine: (ha-942957)     </disk>
	I0318 13:09:52.146289 1085975 main.go:141] libmachine: (ha-942957)     <interface type='network'>
	I0318 13:09:52.146351 1085975 main.go:141] libmachine: (ha-942957)       <source network='mk-ha-942957'/>
	I0318 13:09:52.146384 1085975 main.go:141] libmachine: (ha-942957)       <model type='virtio'/>
	I0318 13:09:52.146397 1085975 main.go:141] libmachine: (ha-942957)     </interface>
	I0318 13:09:52.146407 1085975 main.go:141] libmachine: (ha-942957)     <interface type='network'>
	I0318 13:09:52.146421 1085975 main.go:141] libmachine: (ha-942957)       <source network='default'/>
	I0318 13:09:52.146433 1085975 main.go:141] libmachine: (ha-942957)       <model type='virtio'/>
	I0318 13:09:52.146447 1085975 main.go:141] libmachine: (ha-942957)     </interface>
	I0318 13:09:52.146458 1085975 main.go:141] libmachine: (ha-942957)     <serial type='pty'>
	I0318 13:09:52.146470 1085975 main.go:141] libmachine: (ha-942957)       <target port='0'/>
	I0318 13:09:52.146483 1085975 main.go:141] libmachine: (ha-942957)     </serial>
	I0318 13:09:52.146500 1085975 main.go:141] libmachine: (ha-942957)     <console type='pty'>
	I0318 13:09:52.146522 1085975 main.go:141] libmachine: (ha-942957)       <target type='serial' port='0'/>
	I0318 13:09:52.146546 1085975 main.go:141] libmachine: (ha-942957)     </console>
	I0318 13:09:52.146567 1085975 main.go:141] libmachine: (ha-942957)     <rng model='virtio'>
	I0318 13:09:52.146580 1085975 main.go:141] libmachine: (ha-942957)       <backend model='random'>/dev/random</backend>
	I0318 13:09:52.146591 1085975 main.go:141] libmachine: (ha-942957)     </rng>
	I0318 13:09:52.146601 1085975 main.go:141] libmachine: (ha-942957)     
	I0318 13:09:52.146608 1085975 main.go:141] libmachine: (ha-942957)     
	I0318 13:09:52.146617 1085975 main.go:141] libmachine: (ha-942957)   </devices>
	I0318 13:09:52.146627 1085975 main.go:141] libmachine: (ha-942957) </domain>
	I0318 13:09:52.146640 1085975 main.go:141] libmachine: (ha-942957) 
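
The domain XML above is used the same way: the driver defines a persistent domain from the XML and then starts it (the "Creating domain..." line). A hedged sketch with the same Go bindings, assuming conn is an open connection to qemu:///system and domainXML holds the document from the log:

package vm

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart registers the domain XML with libvirt and then boots it;
// in the log, defining corresponds to "define libvirt domain using xml" and
// starting to the second "Creating domain..." line.
func defineAndStart(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}
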
	I0318 13:09:52.151732 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:90:91:5f in network default
	I0318 13:09:52.152297 1085975 main.go:141] libmachine: (ha-942957) Ensuring networks are active...
	I0318 13:09:52.152314 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:52.153019 1085975 main.go:141] libmachine: (ha-942957) Ensuring network default is active
	I0318 13:09:52.153290 1085975 main.go:141] libmachine: (ha-942957) Ensuring network mk-ha-942957 is active
	I0318 13:09:52.153733 1085975 main.go:141] libmachine: (ha-942957) Getting domain xml...
	I0318 13:09:52.154447 1085975 main.go:141] libmachine: (ha-942957) Creating domain...
	I0318 13:09:53.344377 1085975 main.go:141] libmachine: (ha-942957) Waiting to get IP...
	I0318 13:09:53.346049 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:53.346865 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:53.346896 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:53.346826 1085998 retry.go:31] will retry after 210.081713ms: waiting for machine to come up
	I0318 13:09:53.558182 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:53.558686 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:53.558710 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:53.558664 1085998 retry.go:31] will retry after 330.740738ms: waiting for machine to come up
	I0318 13:09:53.891328 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:53.891798 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:53.891842 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:53.891735 1085998 retry.go:31] will retry after 436.977306ms: waiting for machine to come up
	I0318 13:09:54.330358 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:54.330771 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:54.330797 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:54.330717 1085998 retry.go:31] will retry after 370.224263ms: waiting for machine to come up
	I0318 13:09:54.702089 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:54.702599 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:54.702641 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:54.702532 1085998 retry.go:31] will retry after 678.316266ms: waiting for machine to come up
	I0318 13:09:55.382306 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:55.382740 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:55.382772 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:55.382662 1085998 retry.go:31] will retry after 772.577483ms: waiting for machine to come up
	I0318 13:09:56.156783 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:56.157216 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:56.157269 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:56.157158 1085998 retry.go:31] will retry after 1.180847447s: waiting for machine to come up
	I0318 13:09:57.339108 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:57.339478 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:57.339538 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:57.339454 1085998 retry.go:31] will retry after 1.39126661s: waiting for machine to come up
	I0318 13:09:58.733271 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:58.733673 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:58.733716 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:58.733639 1085998 retry.go:31] will retry after 1.249593638s: waiting for machine to come up
	I0318 13:09:59.985269 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:09:59.985791 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:09:59.985823 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:09:59.985742 1085998 retry.go:31] will retry after 1.97751072s: waiting for machine to come up
	I0318 13:10:01.964811 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:01.965279 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:10:01.965301 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:10:01.965227 1085998 retry.go:31] will retry after 1.797342776s: waiting for machine to come up
	I0318 13:10:03.765063 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:03.765536 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:10:03.765597 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:10:03.765465 1085998 retry.go:31] will retry after 3.163723566s: waiting for machine to come up
	I0318 13:10:06.931547 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:06.932156 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:10:06.932189 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:10:06.932085 1085998 retry.go:31] will retry after 2.911804479s: waiting for machine to come up
	I0318 13:10:09.847125 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:09.847512 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find current IP address of domain ha-942957 in network mk-ha-942957
	I0318 13:10:09.847532 1085975 main.go:141] libmachine: (ha-942957) DBG | I0318 13:10:09.847477 1085998 retry.go:31] will retry after 5.499705405s: waiting for machine to come up
	I0318 13:10:15.351123 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.351573 1085975 main.go:141] libmachine: (ha-942957) Found IP for machine: 192.168.39.68
	I0318 13:10:15.351607 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has current primary IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.351617 1085975 main.go:141] libmachine: (ha-942957) Reserving static IP address...
	I0318 13:10:15.352085 1085975 main.go:141] libmachine: (ha-942957) DBG | unable to find host DHCP lease matching {name: "ha-942957", mac: "52:54:00:1a:d5:73", ip: "192.168.39.68"} in network mk-ha-942957
	I0318 13:10:15.427818 1085975 main.go:141] libmachine: (ha-942957) DBG | Getting to WaitForSSH function...
	I0318 13:10:15.427866 1085975 main.go:141] libmachine: (ha-942957) Reserved static IP address: 192.168.39.68
	I0318 13:10:15.427878 1085975 main.go:141] libmachine: (ha-942957) Waiting for SSH to be available...
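
The repeated "will retry after ...: waiting for machine to come up" lines above come from a polling loop: query the domain's DHCP lease by MAC address, and if no IP has been handed out yet, sleep for a growing, jittered interval and try again until a deadline. A stand-alone sketch of that pattern follows; lookupIP is a placeholder and the exact delays differ from minikube's retry helper.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for querying libvirt for the domain's DHCP lease by MAC
// address (52:54:00:1a:d5:73 in the log).
func lookupIP() (string, error) {
	return "", errNoLease // pretend the lease has not appeared yet
}

// waitForIP polls lookupIP with a jittered, roughly doubling delay, similar to
// the 210ms -> 330ms -> ... -> 5.5s progression visible in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		if !errors.Is(err, errNoLease) {
			return "", err
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for machine to come up")
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}
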
	I0318 13:10:15.430906 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.431337 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:15.431373 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.431505 1085975 main.go:141] libmachine: (ha-942957) DBG | Using SSH client type: external
	I0318 13:10:15.431583 1085975 main.go:141] libmachine: (ha-942957) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa (-rw-------)
	I0318 13:10:15.431627 1085975 main.go:141] libmachine: (ha-942957) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:10:15.431646 1085975 main.go:141] libmachine: (ha-942957) DBG | About to run SSH command:
	I0318 13:10:15.431661 1085975 main.go:141] libmachine: (ha-942957) DBG | exit 0
	I0318 13:10:15.556263 1085975 main.go:141] libmachine: (ha-942957) DBG | SSH cmd err, output: <nil>: 
	I0318 13:10:15.556562 1085975 main.go:141] libmachine: (ha-942957) KVM machine creation complete!
	I0318 13:10:15.556889 1085975 main.go:141] libmachine: (ha-942957) Calling .GetConfigRaw
	I0318 13:10:15.557412 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:15.557611 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:15.557753 1085975 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 13:10:15.557764 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:10:15.559252 1085975 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 13:10:15.559269 1085975 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 13:10:15.559275 1085975 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 13:10:15.559282 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:15.561521 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.561879 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:15.561912 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.562022 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:15.562203 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.562361 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.562460 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:15.562619 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:15.562881 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:15.562895 1085975 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 13:10:15.667295 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
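
"Waiting for SSH" amounts to repeatedly opening an SSH session with the machine's generated key and running exit 0 until it succeeds, as seen above. A compact sketch with golang.org/x/crypto/ssh; the host, user, and key path are taken from the log, and the surrounding retry loop is omitted.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.68:22", cfg)
	if err != nil {
		log.Fatal(err) // not reachable yet: the caller would sleep and retry
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("exit 0"); err != nil { // the probe command from the log
		log.Fatal(err)
	}
	log.Println("SSH is available")
}
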
	I0318 13:10:15.667317 1085975 main.go:141] libmachine: Detecting the provisioner...
	I0318 13:10:15.667330 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:15.670424 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.670860 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:15.670893 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.671059 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:15.671284 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.671467 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.671655 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:15.671878 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:15.672126 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:15.672141 1085975 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 13:10:15.776890 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 13:10:15.776995 1085975 main.go:141] libmachine: found compatible host: buildroot
	I0318 13:10:15.777014 1085975 main.go:141] libmachine: Provisioning with buildroot...
	I0318 13:10:15.777025 1085975 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:10:15.777319 1085975 buildroot.go:166] provisioning hostname "ha-942957"
	I0318 13:10:15.777349 1085975 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:10:15.777553 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:15.780483 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.780824 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:15.780858 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.780963 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:15.781160 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.781345 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.781512 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:15.781680 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:15.781853 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:15.781864 1085975 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-942957 && echo "ha-942957" | sudo tee /etc/hostname
	I0318 13:10:15.897918 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-942957
	
	I0318 13:10:15.897947 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:15.900609 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.900915 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:15.900945 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:15.901114 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:15.901324 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.901479 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:15.901606 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:15.901755 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:15.901934 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:15.901957 1085975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-942957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-942957/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-942957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:10:16.014910 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:10:16.014952 1085975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 13:10:16.014982 1085975 buildroot.go:174] setting up certificates
	I0318 13:10:16.014996 1085975 provision.go:84] configureAuth start
	I0318 13:10:16.015010 1085975 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:10:16.015393 1085975 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:10:16.018070 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.018424 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.018472 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.018569 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.020928 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.021259 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.021295 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.021432 1085975 provision.go:143] copyHostCerts
	I0318 13:10:16.021487 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:10:16.021547 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 13:10:16.021560 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:10:16.021642 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 13:10:16.021756 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:10:16.021791 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 13:10:16.021802 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:10:16.021848 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 13:10:16.021924 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:10:16.021949 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 13:10:16.021957 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:10:16.021983 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 13:10:16.022036 1085975 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.ha-942957 san=[127.0.0.1 192.168.39.68 ha-942957 localhost minikube]
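
configureAuth issues a server certificate signed by the local CA with the SANs listed above (127.0.0.1, 192.168.39.68, ha-942957, localhost, minikube). Below is a condensed standard-library sketch of signing such a certificate from an existing CA key pair; serial-number handling and PEM encoding are simplified, and the field values are illustrative rather than minikube's exact settings.

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA for the SANs
// seen in the log and returns the DER-encoded certificate plus its key.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative; a random serial is preferable
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-942957"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-942957", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
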
	I0318 13:10:16.090965 1085975 provision.go:177] copyRemoteCerts
	I0318 13:10:16.091041 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:10:16.091071 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.093832 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.094206 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.094234 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.094396 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.094588 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.094740 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.094909 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:16.179035 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:10:16.179122 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:10:16.206260 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:10:16.206343 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 13:10:16.232805 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:10:16.232898 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:10:16.258882 1085975 provision.go:87] duration metric: took 243.867806ms to configureAuth
	I0318 13:10:16.258920 1085975 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:10:16.259106 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:10:16.259257 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.262345 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.262703 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.262738 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.262890 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.263145 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.263332 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.263479 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.263651 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:16.263898 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:16.263918 1085975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:10:16.540112 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:10:16.540145 1085975 main.go:141] libmachine: Checking connection to Docker...
	I0318 13:10:16.540181 1085975 main.go:141] libmachine: (ha-942957) Calling .GetURL
	I0318 13:10:16.541605 1085975 main.go:141] libmachine: (ha-942957) DBG | Using libvirt version 6000000
	I0318 13:10:16.544127 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.544447 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.544474 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.544655 1085975 main.go:141] libmachine: Docker is up and running!
	I0318 13:10:16.544668 1085975 main.go:141] libmachine: Reticulating splines...
	I0318 13:10:16.544676 1085975 client.go:171] duration metric: took 24.859680847s to LocalClient.Create
	I0318 13:10:16.544705 1085975 start.go:167] duration metric: took 24.859747601s to libmachine.API.Create "ha-942957"
	I0318 13:10:16.544718 1085975 start.go:293] postStartSetup for "ha-942957" (driver="kvm2")
	I0318 13:10:16.544760 1085975 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:10:16.544782 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:16.545087 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:10:16.545117 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.547499 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.547781 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.547811 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.547974 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.548212 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.548393 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.548565 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:16.630530 1085975 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:10:16.635219 1085975 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:10:16.635249 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 13:10:16.635318 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 13:10:16.635403 1085975 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 13:10:16.635418 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /etc/ssl/certs/10752082.pem
	I0318 13:10:16.635513 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:10:16.645356 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:10:16.671535 1085975 start.go:296] duration metric: took 126.799398ms for postStartSetup
	I0318 13:10:16.671605 1085975 main.go:141] libmachine: (ha-942957) Calling .GetConfigRaw
	I0318 13:10:16.672222 1085975 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:10:16.674659 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.674958 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.674984 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.675200 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:10:16.675373 1085975 start.go:128] duration metric: took 25.010499122s to createHost
	I0318 13:10:16.675396 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.677648 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.677985 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.678014 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.678119 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.678314 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.678480 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.678660 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.678885 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:10:16.679183 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:10:16.679217 1085975 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:10:16.780941 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767416.765636505
	
	I0318 13:10:16.780974 1085975 fix.go:216] guest clock: 1710767416.765636505
	I0318 13:10:16.780982 1085975 fix.go:229] Guest: 2024-03-18 13:10:16.765636505 +0000 UTC Remote: 2024-03-18 13:10:16.67538499 +0000 UTC m=+25.134263651 (delta=90.251515ms)
	I0318 13:10:16.781023 1085975 fix.go:200] guest clock delta is within tolerance: 90.251515ms
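
The guest-clock check reads date +%s.%N over SSH, compares it with the host-side timestamp, and only resyncs the clock when the difference exceeds a tolerance. A tiny sketch that reproduces the 90.251515ms delta from the log; the one-second tolerance here is an assumption for illustration.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest reading from the log: date +%s.%N returned 1710767416.765636505.
	guest := time.Unix(1710767416, 765636505)
	// Host-side timestamp from the log: 2024-03-18 13:10:16.67538499 +0000 UTC.
	remote := time.Date(2024, time.March, 18, 13, 10, 16, 675384990, time.UTC)

	delta := remote.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative threshold; only large drifts trigger a resync
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
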
	I0318 13:10:16.781029 1085975 start.go:83] releasing machines lock for "ha-942957", held for 25.116266785s
	I0318 13:10:16.781055 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:16.781369 1085975 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:10:16.784280 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.784707 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.784741 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.784890 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:16.785435 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:16.785650 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:16.785736 1085975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:10:16.785792 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.785912 1085975 ssh_runner.go:195] Run: cat /version.json
	I0318 13:10:16.785936 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:16.788384 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.788745 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.788773 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.788790 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.788912 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.789118 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.789225 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:16.789254 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:16.789278 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.789565 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:16.789553 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:16.789720 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:16.789875 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:16.790034 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:16.865129 1085975 ssh_runner.go:195] Run: systemctl --version
	I0318 13:10:16.892786 1085975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:10:17.060087 1085975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:10:17.066212 1085975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:10:17.066283 1085975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:10:17.082827 1085975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:10:17.082856 1085975 start.go:494] detecting cgroup driver to use...
	I0318 13:10:17.082932 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:10:17.099560 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:10:17.114461 1085975 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:10:17.114541 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:10:17.129682 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:10:17.144424 1085975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:10:17.260772 1085975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:10:17.396399 1085975 docker.go:233] disabling docker service ...
	I0318 13:10:17.396474 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:10:17.412052 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:10:17.426062 1085975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:10:17.565994 1085975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:10:17.682678 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:10:17.698151 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:10:17.718408 1085975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:10:17.718470 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:10:17.730543 1085975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:10:17.730628 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:10:17.742758 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:10:17.754592 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:10:17.766316 1085975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:10:17.778421 1085975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:10:17.788956 1085975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:10:17.789016 1085975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:10:17.802605 1085975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:10:17.813511 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:10:17.924526 1085975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:10:18.062906 1085975 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:10:18.062988 1085975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:10:18.068672 1085975 start.go:562] Will wait 60s for crictl version
	I0318 13:10:18.068743 1085975 ssh_runner.go:195] Run: which crictl
	I0318 13:10:18.073084 1085975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:10:18.110237 1085975 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:10:18.110330 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:10:18.140748 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:10:18.173240 1085975 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:10:18.174730 1085975 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:10:18.177629 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:18.178081 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:18.178108 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:18.178340 1085975 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:10:18.183051 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:10:18.199520 1085975 kubeadm.go:877] updating cluster {Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:10:18.199651 1085975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:10:18.199707 1085975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:10:18.242783 1085975 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:10:18.242861 1085975 ssh_runner.go:195] Run: which lz4
	I0318 13:10:18.247684 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 13:10:18.247812 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 13:10:18.252522 1085975 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:10:18.252569 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:10:19.977725 1085975 crio.go:444] duration metric: took 1.729948171s to copy over tarball
	I0318 13:10:19.977806 1085975 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:10:22.364382 1085975 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.386530945s)
	I0318 13:10:22.364430 1085975 crio.go:451] duration metric: took 2.38667205s to extract the tarball
	I0318 13:10:22.364441 1085975 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:10:22.406482 1085975 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:10:22.457704 1085975 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:10:22.457732 1085975 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:10:22.457743 1085975 kubeadm.go:928] updating node { 192.168.39.68 8443 v1.28.4 crio true true} ...
	I0318 13:10:22.457898 1085975 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-942957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:10:22.457986 1085975 ssh_runner.go:195] Run: crio config
	I0318 13:10:22.513985 1085975 cni.go:84] Creating CNI manager for ""
	I0318 13:10:22.514013 1085975 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 13:10:22.514027 1085975 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:10:22.514057 1085975 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-942957 NodeName:ha-942957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:10:22.514240 1085975 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-942957"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:10:22.514272 1085975 kube-vip.go:111] generating kube-vip config ...
	I0318 13:10:22.514327 1085975 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 13:10:22.533171 1085975 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 13:10:22.533314 1085975 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 13:10:22.533385 1085975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:10:22.544052 1085975 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:10:22.544148 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 13:10:22.554787 1085975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0318 13:10:22.574408 1085975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:10:22.593107 1085975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0318 13:10:22.612295 1085975 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 13:10:22.631469 1085975 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 13:10:22.635602 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:10:22.648752 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:10:22.772280 1085975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:10:22.798920 1085975 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957 for IP: 192.168.39.68
	I0318 13:10:22.798946 1085975 certs.go:194] generating shared ca certs ...
	I0318 13:10:22.798964 1085975 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:22.799142 1085975 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 13:10:22.799225 1085975 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 13:10:22.799238 1085975 certs.go:256] generating profile certs ...
	I0318 13:10:22.799314 1085975 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key
	I0318 13:10:22.799331 1085975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt with IP's: []
	I0318 13:10:22.984629 1085975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt ...
	I0318 13:10:22.984664 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt: {Name:mk72770fd094ac57b7f08b92822bfa33014aa130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:22.984854 1085975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key ...
	I0318 13:10:22.984880 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key: {Name:mk92717c7fc69d31773f4ece55bb512c38949d8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:22.984966 1085975 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.4a593926
	I0318 13:10:22.984981 1085975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.4a593926 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.254]
	I0318 13:10:23.092142 1085975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.4a593926 ...
	I0318 13:10:23.092179 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.4a593926: {Name:mkd040c2f6dabb7f5d21f0d07a1359550af09051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:23.092351 1085975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.4a593926 ...
	I0318 13:10:23.092364 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.4a593926: {Name:mk754980ae12a2603c5698ed6a63aa3a63976015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:23.092439 1085975 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.4a593926 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt
	I0318 13:10:23.092512 1085975 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.4a593926 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key
	I0318 13:10:23.092563 1085975 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key
	I0318 13:10:23.092577 1085975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt with IP's: []
	I0318 13:10:23.176564 1085975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt ...
	I0318 13:10:23.176602 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt: {Name:mka5b3142058f0d61261c04d9ec811971eddfbfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:23.176764 1085975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key ...
	I0318 13:10:23.176775 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key: {Name:mk6ffe02690f2bea5be214320ff8071a59348b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:23.176840 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:10:23.176858 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:10:23.176868 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:10:23.176878 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:10:23.176889 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:10:23.176902 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:10:23.176912 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:10:23.176921 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:10:23.176971 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 13:10:23.177004 1085975 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 13:10:23.177013 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 13:10:23.177032 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:10:23.177054 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:10:23.177074 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 13:10:23.177109 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:10:23.177157 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:10:23.177192 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem -> /usr/share/ca-certificates/1075208.pem
	I0318 13:10:23.177204 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /usr/share/ca-certificates/10752082.pem
	I0318 13:10:23.177843 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:10:23.206273 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:10:23.234050 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:10:23.260561 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:10:23.287344 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:10:23.313475 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:10:23.339480 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:10:23.366812 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:10:23.392858 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:10:23.419475 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 13:10:23.446493 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 13:10:23.473492 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:10:23.490650 1085975 ssh_runner.go:195] Run: openssl version
	I0318 13:10:23.496760 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:10:23.507582 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:10:23.512387 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:10:23.512466 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:10:23.518441 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:10:23.529033 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 13:10:23.539985 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 13:10:23.544610 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:10:23.544678 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 13:10:23.550673 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 13:10:23.565931 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 13:10:23.582080 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 13:10:23.588687 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:10:23.588756 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 13:10:23.596134 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:10:23.612992 1085975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:10:23.617614 1085975 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:10:23.617705 1085975 kubeadm.go:391] StartCluster: {Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:10:23.617827 1085975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:10:23.617891 1085975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:10:23.662686 1085975 cri.go:89] found id: ""
	I0318 13:10:23.662809 1085975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 13:10:23.673191 1085975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:10:23.684085 1085975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:10:23.694399 1085975 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:10:23.694419 1085975 kubeadm.go:156] found existing configuration files:
	
	I0318 13:10:23.694463 1085975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:10:23.703573 1085975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:10:23.703632 1085975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:10:23.714152 1085975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:10:23.723161 1085975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:10:23.723210 1085975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:10:23.732323 1085975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:10:23.741268 1085975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:10:23.741326 1085975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:10:23.750261 1085975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:10:23.761140 1085975 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:10:23.761207 1085975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:10:23.771686 1085975 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:10:24.018539 1085975 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:10:34.924636 1085975 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:10:34.924717 1085975 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:10:34.924809 1085975 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:10:34.924952 1085975 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:10:34.925086 1085975 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:10:34.925176 1085975 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:10:34.926965 1085975 out.go:204]   - Generating certificates and keys ...
	I0318 13:10:34.927064 1085975 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:10:34.927142 1085975 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:10:34.927220 1085975 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 13:10:34.927301 1085975 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 13:10:34.927392 1085975 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 13:10:34.927467 1085975 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 13:10:34.927548 1085975 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 13:10:34.927700 1085975 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-942957 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I0318 13:10:34.927785 1085975 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 13:10:34.927959 1085975 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-942957 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I0318 13:10:34.928052 1085975 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 13:10:34.928141 1085975 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 13:10:34.928220 1085975 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 13:10:34.928307 1085975 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:10:34.928371 1085975 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:10:34.928439 1085975 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:10:34.928517 1085975 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:10:34.928595 1085975 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:10:34.928698 1085975 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:10:34.928791 1085975 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:10:34.930425 1085975 out.go:204]   - Booting up control plane ...
	I0318 13:10:34.930555 1085975 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:10:34.930641 1085975 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:10:34.930702 1085975 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:10:34.930799 1085975 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:10:34.930887 1085975 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:10:34.930929 1085975 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:10:34.931053 1085975 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:10:34.931118 1085975 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.625794 seconds
	I0318 13:10:34.931213 1085975 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:10:34.931326 1085975 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:10:34.931376 1085975 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:10:34.931569 1085975 kubeadm.go:309] [mark-control-plane] Marking the node ha-942957 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:10:34.931650 1085975 kubeadm.go:309] [bootstrap-token] Using token: bc0gmg.0whp06jnjk6h7olc
	I0318 13:10:34.933085 1085975 out.go:204]   - Configuring RBAC rules ...
	I0318 13:10:34.933228 1085975 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:10:34.933307 1085975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:10:34.933482 1085975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:10:34.933678 1085975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:10:34.933804 1085975 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:10:34.933880 1085975 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:10:34.933989 1085975 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:10:34.934052 1085975 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:10:34.934135 1085975 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:10:34.934144 1085975 kubeadm.go:309] 
	I0318 13:10:34.934228 1085975 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:10:34.934248 1085975 kubeadm.go:309] 
	I0318 13:10:34.934342 1085975 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:10:34.934351 1085975 kubeadm.go:309] 
	I0318 13:10:34.934382 1085975 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:10:34.934466 1085975 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:10:34.934540 1085975 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:10:34.934550 1085975 kubeadm.go:309] 
	I0318 13:10:34.934634 1085975 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:10:34.934649 1085975 kubeadm.go:309] 
	I0318 13:10:34.934713 1085975 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:10:34.934721 1085975 kubeadm.go:309] 
	I0318 13:10:34.934771 1085975 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:10:34.934844 1085975 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:10:34.934963 1085975 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:10:34.934981 1085975 kubeadm.go:309] 
	I0318 13:10:34.935085 1085975 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:10:34.935186 1085975 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:10:34.935195 1085975 kubeadm.go:309] 
	I0318 13:10:34.935304 1085975 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bc0gmg.0whp06jnjk6h7olc \
	I0318 13:10:34.935432 1085975 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 13:10:34.935463 1085975 kubeadm.go:309] 	--control-plane 
	I0318 13:10:34.935473 1085975 kubeadm.go:309] 
	I0318 13:10:34.935580 1085975 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:10:34.935608 1085975 kubeadm.go:309] 
	I0318 13:10:34.935712 1085975 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bc0gmg.0whp06jnjk6h7olc \
	I0318 13:10:34.935813 1085975 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 13:10:34.935830 1085975 cni.go:84] Creating CNI manager for ""
	I0318 13:10:34.935837 1085975 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 13:10:34.937520 1085975 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 13:10:34.939260 1085975 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 13:10:34.960039 1085975 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 13:10:34.960065 1085975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 13:10:34.990411 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 13:10:36.003170 1085975 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.012709402s)
	I0318 13:10:36.003232 1085975 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:10:36.003350 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:36.003355 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-942957 minikube.k8s.io/updated_at=2024_03_18T13_10_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=ha-942957 minikube.k8s.io/primary=true
	I0318 13:10:36.023685 1085975 ops.go:34] apiserver oom_adj: -16
	I0318 13:10:36.202362 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:36.703150 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:37.203113 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:37.703030 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:38.203008 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:38.702557 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:39.203107 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:39.703141 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:40.203292 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:40.703258 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:41.202454 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:41.703077 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:42.203366 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:42.702766 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:43.202571 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:43.702456 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:44.203352 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:44.702541 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:45.202497 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:45.703278 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:46.202912 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:46.702932 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:47.202576 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:47.702392 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:48.203053 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:10:48.354627 1085975 kubeadm.go:1107] duration metric: took 12.351352858s to wait for elevateKubeSystemPrivileges
	W0318 13:10:48.354673 1085975 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:10:48.354683 1085975 kubeadm.go:393] duration metric: took 24.736991777s to StartCluster
	I0318 13:10:48.354709 1085975 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:48.354797 1085975 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:10:48.355897 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:10:48.356178 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 13:10:48.356214 1085975 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:10:48.356246 1085975 start.go:240] waiting for startup goroutines ...
	I0318 13:10:48.356261 1085975 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:10:48.356324 1085975 addons.go:69] Setting storage-provisioner=true in profile "ha-942957"
	I0318 13:10:48.356339 1085975 addons.go:69] Setting default-storageclass=true in profile "ha-942957"
	I0318 13:10:48.356360 1085975 addons.go:234] Setting addon storage-provisioner=true in "ha-942957"
	I0318 13:10:48.356378 1085975 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-942957"
	I0318 13:10:48.356392 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:10:48.356484 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:10:48.356836 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:48.356846 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:48.356872 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:48.356873 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:48.372994 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42403
	I0318 13:10:48.373360 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35917
	I0318 13:10:48.373518 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:48.373798 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:48.374111 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:48.374137 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:48.374351 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:48.374379 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:48.374484 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:48.374749 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:10:48.374779 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:48.375272 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:48.375297 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:48.377322 1085975 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:10:48.377630 1085975 kapi.go:59] client config for ha-942957: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt", KeyFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key", CAFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:10:48.378136 1085975 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 13:10:48.378345 1085975 addons.go:234] Setting addon default-storageclass=true in "ha-942957"
	I0318 13:10:48.378390 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:10:48.378655 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:48.378687 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:48.391816 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0318 13:10:48.392348 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:48.392960 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:48.392988 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:48.393323 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:48.393517 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:10:48.394441 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35131
	I0318 13:10:48.394885 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:48.395409 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:48.395427 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:48.395449 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:48.397759 1085975 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:10:48.395842 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:48.399386 1085975 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:10:48.399405 1085975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:10:48.399427 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:48.399996 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:48.400062 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:48.402534 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:48.402992 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:48.403022 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:48.403163 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:48.403412 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:48.403601 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:48.403799 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:48.416542 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I0318 13:10:48.417035 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:48.417591 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:48.417620 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:48.417994 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:48.418255 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:10:48.420156 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:10:48.420462 1085975 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:10:48.420478 1085975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:10:48.420496 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:10:48.423448 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:48.423931 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:10:48.424000 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:10:48.424193 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:10:48.424828 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:10:48.425071 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:10:48.425281 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:10:48.520709 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 13:10:48.528875 1085975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:10:48.595369 1085975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:10:49.206587 1085975 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
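
(Note: the pipeline logged at 13:10:48.520709 rewrites the live coredns ConfigMap by piping "kubectl get -o yaml" through sed and back into "kubectl replace". For reference, below is a minimal client-go sketch of an equivalent in-place edit; this is a hypothetical helper, not the code path minikube uses, and it assumes an already-constructed *kubernetes.Clientset.)

// Package addons: sketch of injecting the host.minikube.internal record into CoreDNS.
package addons

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord fetches the coredns ConfigMap in kube-system, inserts a hosts
// block ahead of the forward directive (mirroring the sed edit in the log), and
// writes the result back. Hypothetical sketch only.
func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
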
	I0318 13:10:49.401036 1085975 main.go:141] libmachine: Making call to close driver server
	I0318 13:10:49.401069 1085975 main.go:141] libmachine: (ha-942957) Calling .Close
	I0318 13:10:49.401163 1085975 main.go:141] libmachine: Making call to close driver server
	I0318 13:10:49.401192 1085975 main.go:141] libmachine: (ha-942957) Calling .Close
	I0318 13:10:49.401466 1085975 main.go:141] libmachine: (ha-942957) DBG | Closing plugin on server side
	I0318 13:10:49.401513 1085975 main.go:141] libmachine: (ha-942957) DBG | Closing plugin on server side
	I0318 13:10:49.401546 1085975 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:10:49.401549 1085975 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:10:49.401564 1085975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:10:49.401567 1085975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:10:49.401579 1085975 main.go:141] libmachine: Making call to close driver server
	I0318 13:10:49.401596 1085975 main.go:141] libmachine: (ha-942957) Calling .Close
	I0318 13:10:49.401610 1085975 main.go:141] libmachine: Making call to close driver server
	I0318 13:10:49.401623 1085975 main.go:141] libmachine: (ha-942957) Calling .Close
	I0318 13:10:49.401849 1085975 main.go:141] libmachine: (ha-942957) DBG | Closing plugin on server side
	I0318 13:10:49.401866 1085975 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:10:49.401876 1085975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:10:49.401878 1085975 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:10:49.401892 1085975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:10:49.402014 1085975 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 13:10:49.402038 1085975 round_trippers.go:469] Request Headers:
	I0318 13:10:49.402048 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.402054 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:10:49.415900 1085975 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 13:10:49.416844 1085975 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 13:10:49.416868 1085975 round_trippers.go:469] Request Headers:
	I0318 13:10:49.416879 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.416886 1085975 round_trippers.go:473]     Content-Type: application/json
	I0318 13:10:49.416899 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:10:49.420848 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:49.421029 1085975 main.go:141] libmachine: Making call to close driver server
	I0318 13:10:49.421044 1085975 main.go:141] libmachine: (ha-942957) Calling .Close
	I0318 13:10:49.421363 1085975 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:10:49.421401 1085975 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:10:49.423324 1085975 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 13:10:49.424600 1085975 addons.go:505] duration metric: took 1.06833728s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 13:10:49.424640 1085975 start.go:245] waiting for cluster config update ...
	I0318 13:10:49.424667 1085975 start.go:254] writing updated cluster config ...
	I0318 13:10:49.426314 1085975 out.go:177] 
	I0318 13:10:49.427664 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:10:49.427746 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:10:49.429489 1085975 out.go:177] * Starting "ha-942957-m02" control-plane node in "ha-942957" cluster
	I0318 13:10:49.431012 1085975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:10:49.431043 1085975 cache.go:56] Caching tarball of preloaded images
	I0318 13:10:49.431145 1085975 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:10:49.431168 1085975 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:10:49.431256 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:10:49.431470 1085975 start.go:360] acquireMachinesLock for ha-942957-m02: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:10:49.431536 1085975 start.go:364] duration metric: took 41.802µs to acquireMachinesLock for "ha-942957-m02"
	I0318 13:10:49.431561 1085975 start.go:93] Provisioning new machine with config: &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:10:49.431633 1085975 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0318 13:10:49.433368 1085975 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:10:49.433456 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:10:49.433489 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:10:49.448574 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I0318 13:10:49.449008 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:10:49.449473 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:10:49.449496 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:10:49.449865 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:10:49.450041 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetMachineName
	I0318 13:10:49.450164 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:10:49.450326 1085975 start.go:159] libmachine.API.Create for "ha-942957" (driver="kvm2")
	I0318 13:10:49.450350 1085975 client.go:168] LocalClient.Create starting
	I0318 13:10:49.450391 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 13:10:49.450437 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:10:49.450453 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:10:49.450510 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 13:10:49.450529 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:10:49.450537 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:10:49.450553 1085975 main.go:141] libmachine: Running pre-create checks...
	I0318 13:10:49.450561 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .PreCreateCheck
	I0318 13:10:49.450724 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetConfigRaw
	I0318 13:10:49.451100 1085975 main.go:141] libmachine: Creating machine...
	I0318 13:10:49.451114 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .Create
	I0318 13:10:49.451293 1085975 main.go:141] libmachine: (ha-942957-m02) Creating KVM machine...
	I0318 13:10:49.452592 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found existing default KVM network
	I0318 13:10:49.452706 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found existing private KVM network mk-ha-942957
	I0318 13:10:49.452886 1085975 main.go:141] libmachine: (ha-942957-m02) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02 ...
	I0318 13:10:49.452910 1085975 main.go:141] libmachine: (ha-942957-m02) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 13:10:49.452978 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:49.452877 1086314 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:10:49.453055 1085975 main.go:141] libmachine: (ha-942957-m02) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 13:10:49.729200 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:49.729032 1086314 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa...
	I0318 13:10:49.888681 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:49.888533 1086314 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/ha-942957-m02.rawdisk...
	I0318 13:10:49.888717 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Writing magic tar header
	I0318 13:10:49.888730 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Writing SSH key tar header
	I0318 13:10:49.888743 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:49.888673 1086314 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02 ...
	I0318 13:10:49.888875 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02
	I0318 13:10:49.888903 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02 (perms=drwx------)
	I0318 13:10:49.888914 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 13:10:49.888931 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:10:49.888944 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 13:10:49.888956 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 13:10:49.888966 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home/jenkins
	I0318 13:10:49.888996 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Checking permissions on dir: /home
	I0318 13:10:49.889012 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Skipping /home - not owner
	I0318 13:10:49.889020 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 13:10:49.889032 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 13:10:49.889045 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 13:10:49.889061 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 13:10:49.889074 1085975 main.go:141] libmachine: (ha-942957-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 13:10:49.889086 1085975 main.go:141] libmachine: (ha-942957-m02) Creating domain...
	I0318 13:10:49.889944 1085975 main.go:141] libmachine: (ha-942957-m02) define libvirt domain using xml: 
	I0318 13:10:49.889968 1085975 main.go:141] libmachine: (ha-942957-m02) <domain type='kvm'>
	I0318 13:10:49.889979 1085975 main.go:141] libmachine: (ha-942957-m02)   <name>ha-942957-m02</name>
	I0318 13:10:49.889986 1085975 main.go:141] libmachine: (ha-942957-m02)   <memory unit='MiB'>2200</memory>
	I0318 13:10:49.889994 1085975 main.go:141] libmachine: (ha-942957-m02)   <vcpu>2</vcpu>
	I0318 13:10:49.890000 1085975 main.go:141] libmachine: (ha-942957-m02)   <features>
	I0318 13:10:49.890008 1085975 main.go:141] libmachine: (ha-942957-m02)     <acpi/>
	I0318 13:10:49.890015 1085975 main.go:141] libmachine: (ha-942957-m02)     <apic/>
	I0318 13:10:49.890023 1085975 main.go:141] libmachine: (ha-942957-m02)     <pae/>
	I0318 13:10:49.890031 1085975 main.go:141] libmachine: (ha-942957-m02)     
	I0318 13:10:49.890065 1085975 main.go:141] libmachine: (ha-942957-m02)   </features>
	I0318 13:10:49.890102 1085975 main.go:141] libmachine: (ha-942957-m02)   <cpu mode='host-passthrough'>
	I0318 13:10:49.890115 1085975 main.go:141] libmachine: (ha-942957-m02)   
	I0318 13:10:49.890122 1085975 main.go:141] libmachine: (ha-942957-m02)   </cpu>
	I0318 13:10:49.890151 1085975 main.go:141] libmachine: (ha-942957-m02)   <os>
	I0318 13:10:49.890163 1085975 main.go:141] libmachine: (ha-942957-m02)     <type>hvm</type>
	I0318 13:10:49.890235 1085975 main.go:141] libmachine: (ha-942957-m02)     <boot dev='cdrom'/>
	I0318 13:10:49.890282 1085975 main.go:141] libmachine: (ha-942957-m02)     <boot dev='hd'/>
	I0318 13:10:49.890293 1085975 main.go:141] libmachine: (ha-942957-m02)     <bootmenu enable='no'/>
	I0318 13:10:49.890300 1085975 main.go:141] libmachine: (ha-942957-m02)   </os>
	I0318 13:10:49.890306 1085975 main.go:141] libmachine: (ha-942957-m02)   <devices>
	I0318 13:10:49.890313 1085975 main.go:141] libmachine: (ha-942957-m02)     <disk type='file' device='cdrom'>
	I0318 13:10:49.890321 1085975 main.go:141] libmachine: (ha-942957-m02)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/boot2docker.iso'/>
	I0318 13:10:49.890333 1085975 main.go:141] libmachine: (ha-942957-m02)       <target dev='hdc' bus='scsi'/>
	I0318 13:10:49.890339 1085975 main.go:141] libmachine: (ha-942957-m02)       <readonly/>
	I0318 13:10:49.890345 1085975 main.go:141] libmachine: (ha-942957-m02)     </disk>
	I0318 13:10:49.890354 1085975 main.go:141] libmachine: (ha-942957-m02)     <disk type='file' device='disk'>
	I0318 13:10:49.890365 1085975 main.go:141] libmachine: (ha-942957-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 13:10:49.890404 1085975 main.go:141] libmachine: (ha-942957-m02)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/ha-942957-m02.rawdisk'/>
	I0318 13:10:49.890429 1085975 main.go:141] libmachine: (ha-942957-m02)       <target dev='hda' bus='virtio'/>
	I0318 13:10:49.890440 1085975 main.go:141] libmachine: (ha-942957-m02)     </disk>
	I0318 13:10:49.890451 1085975 main.go:141] libmachine: (ha-942957-m02)     <interface type='network'>
	I0318 13:10:49.890466 1085975 main.go:141] libmachine: (ha-942957-m02)       <source network='mk-ha-942957'/>
	I0318 13:10:49.890478 1085975 main.go:141] libmachine: (ha-942957-m02)       <model type='virtio'/>
	I0318 13:10:49.890489 1085975 main.go:141] libmachine: (ha-942957-m02)     </interface>
	I0318 13:10:49.890500 1085975 main.go:141] libmachine: (ha-942957-m02)     <interface type='network'>
	I0318 13:10:49.890523 1085975 main.go:141] libmachine: (ha-942957-m02)       <source network='default'/>
	I0318 13:10:49.890545 1085975 main.go:141] libmachine: (ha-942957-m02)       <model type='virtio'/>
	I0318 13:10:49.890558 1085975 main.go:141] libmachine: (ha-942957-m02)     </interface>
	I0318 13:10:49.890568 1085975 main.go:141] libmachine: (ha-942957-m02)     <serial type='pty'>
	I0318 13:10:49.890595 1085975 main.go:141] libmachine: (ha-942957-m02)       <target port='0'/>
	I0318 13:10:49.890606 1085975 main.go:141] libmachine: (ha-942957-m02)     </serial>
	I0318 13:10:49.890618 1085975 main.go:141] libmachine: (ha-942957-m02)     <console type='pty'>
	I0318 13:10:49.890630 1085975 main.go:141] libmachine: (ha-942957-m02)       <target type='serial' port='0'/>
	I0318 13:10:49.890645 1085975 main.go:141] libmachine: (ha-942957-m02)     </console>
	I0318 13:10:49.890669 1085975 main.go:141] libmachine: (ha-942957-m02)     <rng model='virtio'>
	I0318 13:10:49.890692 1085975 main.go:141] libmachine: (ha-942957-m02)       <backend model='random'>/dev/random</backend>
	I0318 13:10:49.890709 1085975 main.go:141] libmachine: (ha-942957-m02)     </rng>
	I0318 13:10:49.890721 1085975 main.go:141] libmachine: (ha-942957-m02)     
	I0318 13:10:49.890730 1085975 main.go:141] libmachine: (ha-942957-m02)     
	I0318 13:10:49.890739 1085975 main.go:141] libmachine: (ha-942957-m02)   </devices>
	I0318 13:10:49.890750 1085975 main.go:141] libmachine: (ha-942957-m02) </domain>
	I0318 13:10:49.890765 1085975 main.go:141] libmachine: (ha-942957-m02) 
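
(Note: the XML block above is the libvirt domain definition the kvm2 driver feeds to libvirt for the new node. Below is a minimal sketch of defining and booting a domain from such XML using the libvirt.org/go/libvirt bindings; the bindings and the qemu:///system URI match the profile config, but the real driver builds the XML from a template and layers on validation and cleanup.)

// Package kvmutil: sketch of turning domain XML into a running VM.
package kvmutil

import libvirt "libvirt.org/go/libvirt"

// defineAndStart defines a persistent libvirt domain from the given XML and boots it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI above
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persists the definition without starting it
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create() // boots the domain ("Creating domain..." in the log)
}
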
	I0318 13:10:49.897843 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:43:7a:a2 in network default
	I0318 13:10:49.898368 1085975 main.go:141] libmachine: (ha-942957-m02) Ensuring networks are active...
	I0318 13:10:49.898395 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:49.899121 1085975 main.go:141] libmachine: (ha-942957-m02) Ensuring network default is active
	I0318 13:10:49.899508 1085975 main.go:141] libmachine: (ha-942957-m02) Ensuring network mk-ha-942957 is active
	I0318 13:10:49.899822 1085975 main.go:141] libmachine: (ha-942957-m02) Getting domain xml...
	I0318 13:10:49.900586 1085975 main.go:141] libmachine: (ha-942957-m02) Creating domain...
	I0318 13:10:51.153496 1085975 main.go:141] libmachine: (ha-942957-m02) Waiting to get IP...
	I0318 13:10:51.154559 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:51.154977 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:51.155059 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:51.155004 1086314 retry.go:31] will retry after 304.73384ms: waiting for machine to come up
	I0318 13:10:51.461750 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:51.462228 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:51.462273 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:51.462153 1086314 retry.go:31] will retry after 316.844478ms: waiting for machine to come up
	I0318 13:10:51.782145 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:51.782615 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:51.782641 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:51.782559 1086314 retry.go:31] will retry after 484.230769ms: waiting for machine to come up
	I0318 13:10:52.268240 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:52.268810 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:52.268836 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:52.268772 1086314 retry.go:31] will retry after 523.434483ms: waiting for machine to come up
	I0318 13:10:52.793578 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:52.793983 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:52.794011 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:52.793928 1086314 retry.go:31] will retry after 497.999879ms: waiting for machine to come up
	I0318 13:10:53.293455 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:53.293955 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:53.293986 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:53.293916 1086314 retry.go:31] will retry after 673.425463ms: waiting for machine to come up
	I0318 13:10:53.969019 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:53.969485 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:53.969513 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:53.969422 1086314 retry.go:31] will retry after 847.284583ms: waiting for machine to come up
	I0318 13:10:54.818953 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:54.819333 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:54.819367 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:54.819304 1086314 retry.go:31] will retry after 1.325118174s: waiting for machine to come up
	I0318 13:10:56.145864 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:56.146313 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:56.146345 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:56.146257 1086314 retry.go:31] will retry after 1.795876809s: waiting for machine to come up
	I0318 13:10:57.944232 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:57.944761 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:57.944805 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:57.944713 1086314 retry.go:31] will retry after 1.744054736s: waiting for machine to come up
	I0318 13:10:59.691017 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:10:59.691544 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:10:59.691576 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:10:59.691495 1086314 retry.go:31] will retry after 2.51806491s: waiting for machine to come up
	I0318 13:11:02.212991 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:02.213429 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:11:02.213457 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:11:02.213377 1086314 retry.go:31] will retry after 2.637821328s: waiting for machine to come up
	I0318 13:11:04.852429 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:04.853031 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:11:04.853062 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:11:04.852991 1086314 retry.go:31] will retry after 3.347642909s: waiting for machine to come up
	I0318 13:11:08.204516 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:08.204861 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find current IP address of domain ha-942957-m02 in network mk-ha-942957
	I0318 13:11:08.204887 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | I0318 13:11:08.204815 1086314 retry.go:31] will retry after 5.549852077s: waiting for machine to come up
	I0318 13:11:13.760003 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.760478 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has current primary IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.760511 1085975 main.go:141] libmachine: (ha-942957-m02) Found IP for machine: 192.168.39.22
	I0318 13:11:13.760526 1085975 main.go:141] libmachine: (ha-942957-m02) Reserving static IP address...
	I0318 13:11:13.760873 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | unable to find host DHCP lease matching {name: "ha-942957-m02", mac: "52:54:00:20:c9:87", ip: "192.168.39.22"} in network mk-ha-942957
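
(Note: the "will retry after ..." lines above come from polling the DHCP leases for the new MAC address with a growing, jittered delay until an address shows up. The generic sketch below illustrates that wait loop; the lookup callback and the exact delays are illustrative, not the driver's actual retry implementation.)

// Package kvmutil: sketch of the wait-for-IP poll loop seen in the log.
package kvmutil

import (
	"errors"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a jittered, growing interval between attempts.
func waitForIP(lookup func() (string, bool), deadline time.Duration) (string, error) {
	delay := 300 * time.Millisecond
	timeout := time.After(deadline)
	for {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		// Grow the base delay and add jitter, roughly matching the spacing in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		select {
		case <-timeout:
			return "", errors.New("timed out waiting for machine IP")
		case <-time.After(sleep):
		}
		delay = delay * 3 / 2
	}
}
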
	I0318 13:11:13.838869 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Getting to WaitForSSH function...
	I0318 13:11:13.838902 1085975 main.go:141] libmachine: (ha-942957-m02) Reserved static IP address: 192.168.39.22
	I0318 13:11:13.838914 1085975 main.go:141] libmachine: (ha-942957-m02) Waiting for SSH to be available...
	I0318 13:11:13.841898 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.842346 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:13.842371 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.842503 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Using SSH client type: external
	I0318 13:11:13.842529 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa (-rw-------)
	I0318 13:11:13.842557 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:11:13.842588 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | About to run SSH command:
	I0318 13:11:13.842600 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | exit 0
	I0318 13:11:13.967996 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | SSH cmd err, output: <nil>: 
	I0318 13:11:13.968301 1085975 main.go:141] libmachine: (ha-942957-m02) KVM machine creation complete!
	I0318 13:11:13.968615 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetConfigRaw
	I0318 13:11:13.969177 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:13.969423 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:13.969620 1085975 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 13:11:13.969635 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:11:13.970999 1085975 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 13:11:13.971014 1085975 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 13:11:13.971020 1085975 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 13:11:13.971026 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:13.973564 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.973927 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:13.973959 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:13.974086 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:13.974255 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:13.974426 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:13.974574 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:13.974746 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:13.975017 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:13.975034 1085975 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 13:11:14.079477 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
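
(Note: WaitForSSH amounts to running "exit 0" over SSH until it succeeds, first through the external ssh binary and then through the native client shown above. Below is a minimal probe using golang.org/x/crypto/ssh; key loading and host-key checking are simplified, and port 22 with key auth is assumed as in the log.)

// Package kvmutil: sketch of an SSH readiness probe.
package kvmutil

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// sshReady dials host:22 with the given private key and runs "exit 0",
// returning nil once the command completes successfully.
func sshReady(host, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}
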
	I0318 13:11:14.079502 1085975 main.go:141] libmachine: Detecting the provisioner...
	I0318 13:11:14.079511 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.082270 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.082612 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.082646 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.082882 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:14.083098 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.083251 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.083391 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:14.083543 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:14.083762 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:14.083775 1085975 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 13:11:14.189208 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 13:11:14.189290 1085975 main.go:141] libmachine: found compatible host: buildroot
	I0318 13:11:14.189297 1085975 main.go:141] libmachine: Provisioning with buildroot...
	I0318 13:11:14.189305 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetMachineName
	I0318 13:11:14.189643 1085975 buildroot.go:166] provisioning hostname "ha-942957-m02"
	I0318 13:11:14.189681 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetMachineName
	I0318 13:11:14.189889 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.192754 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.193121 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.193167 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.193313 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:14.193508 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.193730 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.193907 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:14.194106 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:14.194327 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:14.194346 1085975 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-942957-m02 && echo "ha-942957-m02" | sudo tee /etc/hostname
	I0318 13:11:14.315415 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-942957-m02
	
	I0318 13:11:14.315443 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.318088 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.318455 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.318488 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.318653 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:14.318890 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.319045 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.319152 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:14.319373 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:14.319598 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:14.319617 1085975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-942957-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-942957-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-942957-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:11:14.442263 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:11:14.442300 1085975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 13:11:14.442322 1085975 buildroot.go:174] setting up certificates
	I0318 13:11:14.442333 1085975 provision.go:84] configureAuth start
	I0318 13:11:14.442343 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetMachineName
	I0318 13:11:14.442679 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:11:14.445488 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.445885 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.445912 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.446082 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.448758 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.449199 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.449231 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.449354 1085975 provision.go:143] copyHostCerts
	I0318 13:11:14.449388 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:11:14.449430 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 13:11:14.449442 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:11:14.449524 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 13:11:14.449636 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:11:14.449661 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 13:11:14.449669 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:11:14.449708 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 13:11:14.449786 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:11:14.449815 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 13:11:14.449824 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:11:14.449861 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 13:11:14.449945 1085975 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.ha-942957-m02 san=[127.0.0.1 192.168.39.22 ha-942957-m02 localhost minikube]
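
(Note: the server certificate above is signed by the minikube CA and carries the node IP, hostname, localhost, minikube and 127.0.0.1 as SANs. Below is a compressed crypto/x509 sketch of minting such a certificate from an existing CA key pair; field values are illustrative, and minikube's own cert helpers additionally handle key reuse, PEM encoding and file permissions.)

// Package certutil: sketch of issuing a SAN-bearing server certificate.
package certutil

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate with the given CA, embedding the
// provided DNS names and IPs as SANs. Returns DER bytes and the new key.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration value in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
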
	I0318 13:11:14.734550 1085975 provision.go:177] copyRemoteCerts
	I0318 13:11:14.734648 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:11:14.734686 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.737413 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.737766 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.737801 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.737957 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:14.738194 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.738412 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:14.738568 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	I0318 13:11:14.823317 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:11:14.823424 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:11:14.849854 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:11:14.849947 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 13:11:14.876765 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:11:14.876861 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:11:14.903102 1085975 provision.go:87] duration metric: took 460.755262ms to configureAuth
	I0318 13:11:14.903140 1085975 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:11:14.903369 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:11:14.903473 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:14.906201 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.906520 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:14.906557 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:14.906669 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:14.906899 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.907068 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:14.907201 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:14.907379 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:14.907563 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:14.907578 1085975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:11:15.186532 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:11:15.186574 1085975 main.go:141] libmachine: Checking connection to Docker...
	I0318 13:11:15.186586 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetURL
	I0318 13:11:15.188285 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | Using libvirt version 6000000
	I0318 13:11:15.190769 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.191366 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.191400 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.191617 1085975 main.go:141] libmachine: Docker is up and running!
	I0318 13:11:15.191641 1085975 main.go:141] libmachine: Reticulating splines...
	I0318 13:11:15.191651 1085975 client.go:171] duration metric: took 25.741291565s to LocalClient.Create
	I0318 13:11:15.191697 1085975 start.go:167] duration metric: took 25.74137213s to libmachine.API.Create "ha-942957"
	I0318 13:11:15.191710 1085975 start.go:293] postStartSetup for "ha-942957-m02" (driver="kvm2")
	I0318 13:11:15.191724 1085975 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:11:15.191766 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:15.192104 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:11:15.192136 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:15.194725 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.195138 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.195180 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.195321 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:15.195571 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:15.195751 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:15.195928 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	I0318 13:11:15.282716 1085975 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:11:15.287369 1085975 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:11:15.287401 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 13:11:15.287470 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 13:11:15.287543 1085975 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 13:11:15.287555 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /etc/ssl/certs/10752082.pem
	I0318 13:11:15.287636 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:11:15.297867 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:11:15.324585 1085975 start.go:296] duration metric: took 132.860177ms for postStartSetup
	I0318 13:11:15.324662 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetConfigRaw
	I0318 13:11:15.325299 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:11:15.327886 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.328282 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.328318 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.328584 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:11:15.328841 1085975 start.go:128] duration metric: took 25.897193359s to createHost
	I0318 13:11:15.328875 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:15.330988 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.331414 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.331443 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.331557 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:15.331765 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:15.331959 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:15.332072 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:15.332204 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:15.332383 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0318 13:11:15.332396 1085975 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:11:15.436627 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767475.410528309
	
	I0318 13:11:15.436659 1085975 fix.go:216] guest clock: 1710767475.410528309
	I0318 13:11:15.436670 1085975 fix.go:229] Guest: 2024-03-18 13:11:15.410528309 +0000 UTC Remote: 2024-03-18 13:11:15.32885812 +0000 UTC m=+83.787736789 (delta=81.670189ms)
	I0318 13:11:15.436693 1085975 fix.go:200] guest clock delta is within tolerance: 81.670189ms
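The fix.go lines above read the guest's wall clock over SSH, compare it to the host clock, and accept the machine when the drift stays inside a tolerance. A minimal Go sketch of that comparison; the 2-second tolerance and the helper name are assumptions for illustration, not minikube's own code:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest and host wall clocks differ by
// less than the allowed drift. The 2s tolerance used in main is an assumed
// value for this sketch, not necessarily what minikube enforces.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	// 1710767475.410528309 is the timestamp the guest returned above.
	guest := time.Unix(1710767475, 410528309)
	host := time.Now()
	fmt.Printf("delta=%v, within tolerance: %v\n", guest.Sub(host), withinTolerance(guest, host, 2*time.Second))
}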
	I0318 13:11:15.436699 1085975 start.go:83] releasing machines lock for "ha-942957-m02", held for 26.005152464s
	I0318 13:11:15.436732 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:15.437022 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:11:15.439753 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.440231 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.440262 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.443148 1085975 out.go:177] * Found network options:
	I0318 13:11:15.444848 1085975 out.go:177]   - NO_PROXY=192.168.39.68
	W0318 13:11:15.446278 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:11:15.446312 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:15.446913 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:15.447126 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:11:15.447226 1085975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:11:15.447271 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	W0318 13:11:15.447383 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:11:15.447492 1085975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:11:15.447518 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:11:15.450153 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.450259 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.450612 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.450656 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.450681 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:15.450702 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:15.450767 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:15.450930 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:11:15.451007 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:15.451222 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:11:15.451235 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:15.451373 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:11:15.451380 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	I0318 13:11:15.451511 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	I0318 13:11:15.687134 1085975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:11:15.694155 1085975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:11:15.694234 1085975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:11:15.711687 1085975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:11:15.711720 1085975 start.go:494] detecting cgroup driver to use...
	I0318 13:11:15.711808 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:11:15.734540 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:11:15.750975 1085975 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:11:15.751061 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:11:15.767571 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:11:15.784124 1085975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:11:15.911047 1085975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:11:16.068271 1085975 docker.go:233] disabling docker service ...
	I0318 13:11:16.068357 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:11:16.083266 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:11:16.096925 1085975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:11:16.222985 1085975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:11:16.346650 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:11:16.362877 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:11:16.383435 1085975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:11:16.383514 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:11:16.395001 1085975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:11:16.395092 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:11:16.406297 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:11:16.417592 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:11:16.428964 1085975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:11:16.442564 1085975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:11:16.453040 1085975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:11:16.453116 1085975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:11:16.467808 1085975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:11:16.478795 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:11:16.591469 1085975 ssh_runner.go:195] Run: sudo systemctl restart crio
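The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup) and then reloads systemd and restarts CRI-O. A small Go sketch that replays the same sed edits locally; the sed commands are copied from the log, but the plain exec runner and error handling are illustrative rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each entry mirrors one `sh -c "sudo sed -i ..."` call from the log:
	// pin the pause image, switch the cgroup manager to cgroupfs, and move
	// conmon into the "pod" cgroup.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			fmt.Printf("%q failed: %v\n%s", c, err, out)
			return
		}
	}
	// The log then runs `systemctl daemon-reload` and `systemctl restart crio`
	// so the rewritten config takes effect.
	fmt.Println("crio.conf.d updated; restart crio to apply")
}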
	I0318 13:11:16.753636 1085975 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:11:16.753740 1085975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:11:16.760573 1085975 start.go:562] Will wait 60s for crictl version
	I0318 13:11:16.760654 1085975 ssh_runner.go:195] Run: which crictl
	I0318 13:11:16.764828 1085975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:11:16.806750 1085975 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:11:16.806834 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:11:16.839735 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:11:16.874776 1085975 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:11:16.876760 1085975 out.go:177]   - env NO_PROXY=192.168.39.68
	I0318 13:11:16.878161 1085975 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:11:16.880934 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:16.881244 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:04 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:11:16.881275 1085975 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:11:16.881461 1085975 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:11:16.885882 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
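The grep/echo/cp pipeline above replaces any stale host.minikube.internal line in /etc/hosts with the 192.168.39.1 mapping. A short Go equivalent of that rewrite; the helper name is invented for the sketch, and main points at a scratch copy since the real /etc/hosts needs root:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<host>" and appends a fresh
// "<ip>\t<host>" mapping, mirroring the grep -v / echo / cp pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts.copy", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("rewrite failed:", err)
	}
}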
	I0318 13:11:16.899647 1085975 mustload.go:65] Loading cluster: ha-942957
	I0318 13:11:16.899899 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:11:16.900251 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:11:16.900290 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:11:16.915276 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34171
	I0318 13:11:16.915848 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:11:16.916403 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:11:16.916431 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:11:16.916756 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:11:16.916967 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:11:16.918424 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:11:16.918730 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:11:16.918756 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:11:16.934538 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
	I0318 13:11:16.935009 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:11:16.935483 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:11:16.935504 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:11:16.935928 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:11:16.936174 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:11:16.936354 1085975 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957 for IP: 192.168.39.22
	I0318 13:11:16.936370 1085975 certs.go:194] generating shared ca certs ...
	I0318 13:11:16.936388 1085975 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:11:16.936572 1085975 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 13:11:16.936647 1085975 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 13:11:16.936664 1085975 certs.go:256] generating profile certs ...
	I0318 13:11:16.936761 1085975 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key
	I0318 13:11:16.936790 1085975 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.54e83969
	I0318 13:11:16.936813 1085975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.54e83969 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.22 192.168.39.254]
	I0318 13:11:17.106959 1085975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.54e83969 ...
	I0318 13:11:17.107000 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.54e83969: {Name:mk47891d09d3218143fd117c3b834e8a2af0c3c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:11:17.107204 1085975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.54e83969 ...
	I0318 13:11:17.107228 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.54e83969: {Name:mka2d870b8258374f0d23ed255f4b0a26e71e372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:11:17.107334 1085975 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.54e83969 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt
	I0318 13:11:17.107522 1085975 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.54e83969 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key
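The profile cert generated above is the apiserver serving certificate, signed by the minikubeCA and carrying the six IP SANs listed in the log (service VIP, localhost, the two node IPs and the kube-vip address). A compact crypto/x509 sketch of issuing such a certificate; the in-memory CA, the common name, and the trimmed error handling are assumptions for illustration, since minikube loads its CA from ~/.minikube/ca.{crt,key}:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a serving certificate with the six IP SANs from the log.
func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.68"), net.ParseIP("192.168.39.22"), net.ParseIP("192.168.39.254"),
		},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway CA standing in for minikubeCA (error checks trimmed for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	if _, err := signServingCert(ca, caKey); err != nil {
		panic(err)
	}
	fmt.Println("issued apiserver serving certificate with 6 IP SANs")
}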
	I0318 13:11:17.107699 1085975 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key
	I0318 13:11:17.107720 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:11:17.107741 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:11:17.107761 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:11:17.107780 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:11:17.107796 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:11:17.107812 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:11:17.107855 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:11:17.107876 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:11:17.107947 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 13:11:17.107995 1085975 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 13:11:17.108009 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 13:11:17.108044 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:11:17.108075 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:11:17.108108 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 13:11:17.108167 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:11:17.108201 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /usr/share/ca-certificates/10752082.pem
	I0318 13:11:17.108221 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:11:17.108238 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem -> /usr/share/ca-certificates/1075208.pem
	I0318 13:11:17.108283 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:11:17.111760 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:11:17.112280 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:11:17.112308 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:11:17.112503 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:11:17.112707 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:11:17.112883 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:11:17.113061 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:11:17.188296 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 13:11:17.194736 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 13:11:17.211739 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 13:11:17.218802 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 13:11:17.231248 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 13:11:17.236559 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 13:11:17.249096 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 13:11:17.254541 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 13:11:17.273937 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 13:11:17.278978 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 13:11:17.291903 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 13:11:17.296810 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0318 13:11:17.309619 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:11:17.338014 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:11:17.364671 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:11:17.391267 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:11:17.418320 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 13:11:17.445903 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:11:17.472474 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:11:17.501830 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:11:17.528982 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 13:11:17.560200 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:11:17.587636 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 13:11:17.615376 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 13:11:17.636805 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 13:11:17.656270 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 13:11:17.674993 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 13:11:17.693944 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 13:11:17.712572 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0318 13:11:17.730647 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 13:11:17.749157 1085975 ssh_runner.go:195] Run: openssl version
	I0318 13:11:17.755339 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 13:11:17.766839 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 13:11:17.772000 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:11:17.772067 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 13:11:17.778092 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:11:17.789516 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:11:17.801214 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:11:17.806217 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:11:17.806292 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:11:17.812289 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:11:17.824223 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 13:11:17.835996 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 13:11:17.841063 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:11:17.841156 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 13:11:17.847062 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
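Each test/ln pair above installs a PEM under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0), which is how the trust store finds it. A sketch of the same idea in Go, shelling out to openssl exactly as the log does; the helper name is invented for the example:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// symlinks the PEM to /etc/ssl/certs/<hash>.0, matching the log's handling
// of 10752082.pem, minikubeCA.pem and 1075208.pem.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/10752082.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/1075208.pem",
	} {
		if err := linkBySubjectHash(p); err != nil {
			fmt.Println("link failed:", err)
		}
	}
}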
	I0318 13:11:17.858651 1085975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:11:17.863636 1085975 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:11:17.863694 1085975 kubeadm.go:928] updating node {m02 192.168.39.22 8443 v1.28.4 crio true true} ...
	I0318 13:11:17.863781 1085975 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-942957-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:11:17.863807 1085975 kube-vip.go:111] generating kube-vip config ...
	I0318 13:11:17.863872 1085975 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 13:11:17.882420 1085975 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 13:11:17.882502 1085975 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
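The ip_vs modprobe check a few lines before this manifest is what "auto-enables control-plane load-balancing": when the modules load, the lb_enable/lb_port environment entries appear in the generated kube-vip static pod; presumably only the ARP-advertised VIP is used otherwise. A minimal Go sketch of that decision (values copied from the generated config; the map-based rendering is illustrative, not the real template):

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the probe in the log: try to load the IPVS modules
// that kube-vip's load balancer needs.
func ipvsAvailable() bool {
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	return err == nil
}

func main() {
	env := map[string]string{
		"vip_arp": "true",
		"port":    "8443",
		"address": "192.168.39.254",
	}
	if ipvsAvailable() {
		// These two entries are what the log adds when the probe succeeds.
		env["lb_enable"] = "true"
		env["lb_port"] = "8443"
	}
	fmt.Println(env)
}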
	I0318 13:11:17.882568 1085975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:11:17.893248 1085975 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 13:11:17.893338 1085975 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 13:11:17.903883 1085975 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 13:11:17.903931 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 13:11:17.903981 1085975 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0318 13:11:17.904009 1085975 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0318 13:11:17.904062 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 13:11:17.908907 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 13:11:17.908946 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 13:11:18.719373 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 13:11:18.719461 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 13:11:18.724692 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 13:11:18.724730 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 13:11:19.428440 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:11:19.443936 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 13:11:19.444038 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 13:11:19.448789 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 13:11:19.448829 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
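Because the new node has no cached Kubernetes binaries, kubectl, kubeadm and kubelet are fetched from dl.k8s.io, verified against the sidecar .sha256 file named in the download URLs, and then copied over SSH into /var/lib/minikube/binaries/v1.28.4. A self-contained Go sketch of such a checksum-verified download; the destination path in main is illustrative:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchWithChecksum downloads url to dest and verifies it against the sha256
// published at url+".sha256", the same pairing the log uses for kubectl,
// kubeadm and kubelet.
func fetchWithChecksum(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(want))
	if len(fields) == 0 || fields[0] != hex.EncodeToString(h.Sum(nil)) {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
	if err := fetchWithChecksum(url, "/tmp/kubectl"); err != nil {
		fmt.Println(err)
	}
}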
	I0318 13:11:19.941599 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 13:11:19.951962 1085975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0318 13:11:19.970095 1085975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:11:19.989398 1085975 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 13:11:20.008620 1085975 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 13:11:20.013237 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:11:20.027096 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:11:20.167553 1085975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:11:20.185859 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:11:20.186296 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:11:20.186337 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:11:20.202883 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
	I0318 13:11:20.203406 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:11:20.203982 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:11:20.204011 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:11:20.204340 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:11:20.204519 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:11:20.204705 1085975 start.go:316] joinCluster: &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:11:20.204830 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 13:11:20.204850 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:11:20.208445 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:11:20.208966 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:11:20.208998 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:11:20.209148 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:11:20.209377 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:11:20.209525 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:11:20.209766 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:11:20.378572 1085975 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:11:20.378656 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j016fd.03qgv2nms34rlin2 --discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-942957-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443"
	I0318 13:12:01.574184 1085975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j016fd.03qgv2nms34rlin2 --discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-942957-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443": (41.195478751s)
	I0318 13:12:01.574238 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 13:12:02.046655 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-942957-m02 minikube.k8s.io/updated_at=2024_03_18T13_12_02_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=ha-942957 minikube.k8s.io/primary=false
	I0318 13:12:02.208529 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-942957-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 13:12:02.340804 1085975 start.go:318] duration metric: took 42.136091091s to joinCluster
	I0318 13:12:02.340915 1085975 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:12:02.342460 1085975 out.go:177] * Verifying Kubernetes components...
	I0318 13:12:02.341244 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:12:02.344035 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:02.534197 1085975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:12:02.563224 1085975 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:12:02.563519 1085975 kapi.go:59] client config for ha-942957: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt", KeyFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key", CAFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 13:12:02.563585 1085975 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.68:8443
	I0318 13:12:02.563853 1085975 node_ready.go:35] waiting up to 6m0s for node "ha-942957-m02" to be "Ready" ...
	I0318 13:12:02.563982 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:02.563992 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:02.564004 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:02.564012 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:02.574687 1085975 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 13:12:03.064611 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:03.064638 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:03.064648 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:03.064652 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:03.068752 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:03.564159 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:03.564184 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:03.564192 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:03.564195 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:03.568404 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:04.064588 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:04.064619 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:04.064631 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:04.064638 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:04.068822 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:04.565106 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:04.565138 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:04.565150 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:04.565156 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:04.570251 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:04.571105 1085975 node_ready.go:53] node "ha-942957-m02" has status "Ready":"False"
	I0318 13:12:05.065135 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:05.065159 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:05.065168 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:05.065172 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:05.069201 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:05.564819 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:05.564842 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:05.564851 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:05.564857 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:05.568616 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:06.064804 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:06.064830 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:06.064839 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:06.064845 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:06.068773 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:06.564987 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:06.565014 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:06.565024 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:06.565029 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:06.570196 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:07.064990 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:07.065047 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:07.065079 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:07.065086 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:07.069615 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:07.070403 1085975 node_ready.go:53] node "ha-942957-m02" has status "Ready":"False"
	I0318 13:12:07.564765 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:07.564792 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:07.564803 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:07.564808 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:07.569145 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:08.064191 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:08.064225 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.064237 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.064243 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.068300 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:08.564610 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:08.564637 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.564645 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.564649 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.569535 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:08.570588 1085975 node_ready.go:49] node "ha-942957-m02" has status "Ready":"True"
	I0318 13:12:08.570621 1085975 node_ready.go:38] duration metric: took 6.006728756s for node "ha-942957-m02" to be "Ready" ...
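The round_trippers lines above are a roughly 500ms polling loop against /api/v1/nodes/ha-942957-m02 until the node's Ready condition turns True, which here took about six seconds of the 6m0s budget. A client-go sketch of the same wait; the kubeconfig path is the one the log loads, and the helper name is made up for the example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object every 500ms until its Ready condition
// is True or the timeout expires, matching the raw GETs shown above.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18427-1067917/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "ha-942957-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}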
	I0318 13:12:08.570633 1085975 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:12:08.570743 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:12:08.570757 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.570768 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.570772 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.577271 1085975 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:12:08.586296 1085975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.586396 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-f6dtz
	I0318 13:12:08.586404 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.586413 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.586422 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.590423 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:08.591241 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:08.591262 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.591272 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.591275 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.595365 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:08.595905 1085975 pod_ready.go:92] pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:08.595930 1085975 pod_ready.go:81] duration metric: took 9.60406ms for pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.595943 1085975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.596031 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pbr9j
	I0318 13:12:08.596042 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.596053 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.596061 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.600342 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:08.600929 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:08.600947 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.600954 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.600957 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.604171 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:08.604882 1085975 pod_ready.go:92] pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:08.604900 1085975 pod_ready.go:81] duration metric: took 8.948996ms for pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.604909 1085975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.604970 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957
	I0318 13:12:08.604980 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.604987 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.604990 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.608453 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:08.609532 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:08.609552 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.609562 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.609568 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.616023 1085975 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:12:08.616522 1085975 pod_ready.go:92] pod "etcd-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:08.616543 1085975 pod_ready.go:81] duration metric: took 11.628043ms for pod "etcd-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.616553 1085975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:08.616608 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:08.616616 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.616623 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.616628 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.619449 1085975 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:12:08.620219 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:08.620236 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:08.620245 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:08.620254 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:08.624122 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:09.117259 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:09.117286 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:09.117294 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:09.117299 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:09.121328 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:09.122550 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:09.122575 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:09.122587 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:09.122592 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:09.126093 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:09.617603 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:09.617628 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:09.617636 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:09.617639 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:09.622168 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:09.622818 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:09.622833 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:09.622842 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:09.622846 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:09.626659 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:10.116778 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:10.116806 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:10.116815 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:10.116819 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:10.121067 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:10.121781 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:10.121800 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:10.121809 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:10.121813 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:10.125700 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:10.617195 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:10.617222 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:10.617230 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:10.617234 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:10.621328 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:10.622361 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:10.622379 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:10.622387 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:10.622390 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:10.626338 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:10.627104 1085975 pod_ready.go:102] pod "etcd-ha-942957-m02" in "kube-system" namespace has status "Ready":"False"
	I0318 13:12:11.117155 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:11.117185 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:11.117198 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:11.117204 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:11.121032 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:11.121754 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:11.121781 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:11.121792 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:11.121796 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:11.125242 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:11.617831 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:11.617863 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:11.617872 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:11.617878 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:11.622109 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:11.622796 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:11.622814 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:11.622822 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:11.622826 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:11.626557 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.117269 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:12:12.117307 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.117318 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.117324 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.121557 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:12.122508 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:12.122528 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.122538 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.122543 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.126397 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.127280 1085975 pod_ready.go:92] pod "etcd-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:12.127310 1085975 pod_ready.go:81] duration metric: took 3.510749299s for pod "etcd-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.127332 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.127414 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957
	I0318 13:12:12.127426 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.127435 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.127439 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.131271 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.131953 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:12.131971 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.131978 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.131983 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.135022 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.135593 1085975 pod_ready.go:92] pod "kube-apiserver-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:12.135618 1085975 pod_ready.go:81] duration metric: took 8.278692ms for pod "kube-apiserver-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.135628 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.135693 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m02
	I0318 13:12:12.135701 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.135708 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.135712 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.138941 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.165000 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:12.165028 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.165039 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.165045 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.169099 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:12.169620 1085975 pod_ready.go:92] pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:12.169648 1085975 pod_ready.go:81] duration metric: took 34.012245ms for pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.169660 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.365192 1085975 request.go:629] Waited for 195.414508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957
	I0318 13:12:12.365279 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957
	I0318 13:12:12.365287 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.365297 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.365308 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.369036 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.565315 1085975 request.go:629] Waited for 195.410515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:12.565400 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:12.565406 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.565414 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.565419 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.569346 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:12.570230 1085975 pod_ready.go:92] pod "kube-controller-manager-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:12.570250 1085975 pod_ready.go:81] duration metric: took 400.582661ms for pod "kube-controller-manager-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.570262 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.765397 1085975 request.go:629] Waited for 195.030021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m02
	I0318 13:12:12.765517 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m02
	I0318 13:12:12.765531 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.765542 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.765553 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.769705 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:12.964743 1085975 request.go:629] Waited for 194.327407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:12.964831 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:12.964837 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:12.964845 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:12.964854 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:12.968992 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:12.970137 1085975 pod_ready.go:92] pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:12.970163 1085975 pod_ready.go:81] duration metric: took 399.894488ms for pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:12.970175 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97vsd" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:13.165361 1085975 request.go:629] Waited for 195.053042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97vsd
	I0318 13:12:13.165480 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97vsd
	I0318 13:12:13.165494 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:13.165506 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:13.165518 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:13.169678 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:13.364719 1085975 request.go:629] Waited for 194.292818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:13.364793 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:13.364799 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:13.364806 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:13.364811 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:13.368495 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:13.369594 1085975 pod_ready.go:92] pod "kube-proxy-97vsd" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:13.369622 1085975 pod_ready.go:81] duration metric: took 399.430259ms for pod "kube-proxy-97vsd" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:13.369636 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjmnr" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:13.565618 1085975 request.go:629] Waited for 195.883659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjmnr
	I0318 13:12:13.565721 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjmnr
	I0318 13:12:13.565733 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:13.565744 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:13.565751 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:13.569905 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:13.765023 1085975 request.go:629] Waited for 194.327941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:13.765105 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:13.765116 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:13.765127 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:13.765135 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:13.770162 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:13.770945 1085975 pod_ready.go:92] pod "kube-proxy-vjmnr" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:13.770968 1085975 pod_ready.go:81] duration metric: took 401.309863ms for pod "kube-proxy-vjmnr" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:13.770981 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:13.965017 1085975 request.go:629] Waited for 193.951484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957
	I0318 13:12:13.965120 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957
	I0318 13:12:13.965130 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:13.965139 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:13.965148 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:13.970848 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:14.164866 1085975 request.go:629] Waited for 192.304183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:14.164954 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:12:14.164962 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.164970 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.164981 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.170090 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:14.170594 1085975 pod_ready.go:92] pod "kube-scheduler-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:14.170618 1085975 pod_ready.go:81] duration metric: took 399.629246ms for pod "kube-scheduler-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:14.170627 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:14.364647 1085975 request.go:629] Waited for 193.89019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m02
	I0318 13:12:14.364750 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m02
	I0318 13:12:14.364757 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.364779 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.364787 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.368979 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:14.565100 1085975 request.go:629] Waited for 195.491375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:14.565185 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:12:14.565193 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.565230 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.565240 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.568977 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:12:14.569428 1085975 pod_ready.go:92] pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:12:14.569449 1085975 pod_ready.go:81] duration metric: took 398.814314ms for pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:12:14.569465 1085975 pod_ready.go:38] duration metric: took 5.998795055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:12:14.569487 1085975 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:12:14.569553 1085975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:12:14.585465 1085975 api_server.go:72] duration metric: took 12.244501387s to wait for apiserver process to appear ...
	I0318 13:12:14.585496 1085975 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:12:14.585519 1085975 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0318 13:12:14.592581 1085975 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0318 13:12:14.592670 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/version
	I0318 13:12:14.592679 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.592688 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.592691 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.594047 1085975 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 13:12:14.594177 1085975 api_server.go:141] control plane version: v1.28.4
	I0318 13:12:14.594197 1085975 api_server.go:131] duration metric: took 8.694439ms to wait for apiserver health ...
	I0318 13:12:14.594206 1085975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:12:14.765641 1085975 request.go:629] Waited for 171.352888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:12:14.765739 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:12:14.765745 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.765753 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.765758 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.771766 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:12:14.778312 1085975 system_pods.go:59] 17 kube-system pods found
	I0318 13:12:14.778347 1085975 system_pods.go:61] "coredns-5dd5756b68-f6dtz" [78994887-c343-49aa-bc5d-e099da752ad6] Running
	I0318 13:12:14.778352 1085975 system_pods.go:61] "coredns-5dd5756b68-pbr9j" [b011a4b6-807e-4af3-90f5-bc9af8ccd454] Running
	I0318 13:12:14.778356 1085975 system_pods.go:61] "etcd-ha-942957" [e3be3484-ebfd-4409-9209-4ef3b656e8d5] Running
	I0318 13:12:14.778359 1085975 system_pods.go:61] "etcd-ha-942957-m02" [2c328aba-cb1d-4ce7-82d2-ee469be1dea3] Running
	I0318 13:12:14.778362 1085975 system_pods.go:61] "kindnet-6rgvl" [eb410475-7c79-4ac1-b7df-a4781100d228] Running
	I0318 13:12:14.778365 1085975 system_pods.go:61] "kindnet-d4smn" [3c9d8fe8-55d9-4682-910f-d2e43efc0a2a] Running
	I0318 13:12:14.778368 1085975 system_pods.go:61] "kube-apiserver-ha-942957" [b0108c9e-26e4-46f5-a1c4-c069eba5b77f] Running
	I0318 13:12:14.778371 1085975 system_pods.go:61] "kube-apiserver-ha-942957-m02" [16270dbb-6afa-4f37-96dc-846a220bfc7b] Running
	I0318 13:12:14.778374 1085975 system_pods.go:61] "kube-controller-manager-ha-942957" [7543e199-eed7-4379-8f21-eb3171cfcfd4] Running
	I0318 13:12:14.778377 1085975 system_pods.go:61] "kube-controller-manager-ha-942957-m02" [dfdb2822-92f0-4146-8ef5-103524b684d4] Running
	I0318 13:12:14.778380 1085975 system_pods.go:61] "kube-proxy-97vsd" [a4d03704-5a4b-4973-b178-912218d00802] Running
	I0318 13:12:14.778383 1085975 system_pods.go:61] "kube-proxy-vjmnr" [e7dac65a-80b9-4e01-b4b0-10222991b604] Running
	I0318 13:12:14.778387 1085975 system_pods.go:61] "kube-scheduler-ha-942957" [125e01b5-776d-43ef-ac0e-3e21693cee59] Running
	I0318 13:12:14.778392 1085975 system_pods.go:61] "kube-scheduler-ha-942957-m02" [8ca9c332-c8ca-4991-955d-7fc4d0939fd0] Running
	I0318 13:12:14.778396 1085975 system_pods.go:61] "kube-vip-ha-942957" [731b23dc-6b59-4ffb-bf5b-c79279c55d75] Running
	I0318 13:12:14.778401 1085975 system_pods.go:61] "kube-vip-ha-942957-m02" [85b36617-81b8-446c-967c-f3c0c60d3926] Running
	I0318 13:12:14.778405 1085975 system_pods.go:61] "storage-provisioner" [b67e544b-41f2-4be4-90ed-971378c82a76] Running
	I0318 13:12:14.778412 1085975 system_pods.go:74] duration metric: took 184.198851ms to wait for pod list to return data ...
	I0318 13:12:14.778423 1085975 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:12:14.964815 1085975 request.go:629] Waited for 186.305806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0318 13:12:14.964937 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0318 13:12:14.964949 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:14.964960 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:14.964969 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:14.969113 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:14.969358 1085975 default_sa.go:45] found service account: "default"
	I0318 13:12:14.969375 1085975 default_sa.go:55] duration metric: took 190.945537ms for default service account to be created ...
	I0318 13:12:14.969385 1085975 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:12:15.164764 1085975 request.go:629] Waited for 195.302748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:12:15.164836 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:12:15.164846 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:15.164857 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:15.164865 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:15.176701 1085975 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 13:12:15.181088 1085975 system_pods.go:86] 17 kube-system pods found
	I0318 13:12:15.181122 1085975 system_pods.go:89] "coredns-5dd5756b68-f6dtz" [78994887-c343-49aa-bc5d-e099da752ad6] Running
	I0318 13:12:15.181128 1085975 system_pods.go:89] "coredns-5dd5756b68-pbr9j" [b011a4b6-807e-4af3-90f5-bc9af8ccd454] Running
	I0318 13:12:15.181132 1085975 system_pods.go:89] "etcd-ha-942957" [e3be3484-ebfd-4409-9209-4ef3b656e8d5] Running
	I0318 13:12:15.181137 1085975 system_pods.go:89] "etcd-ha-942957-m02" [2c328aba-cb1d-4ce7-82d2-ee469be1dea3] Running
	I0318 13:12:15.181141 1085975 system_pods.go:89] "kindnet-6rgvl" [eb410475-7c79-4ac1-b7df-a4781100d228] Running
	I0318 13:12:15.181144 1085975 system_pods.go:89] "kindnet-d4smn" [3c9d8fe8-55d9-4682-910f-d2e43efc0a2a] Running
	I0318 13:12:15.181148 1085975 system_pods.go:89] "kube-apiserver-ha-942957" [b0108c9e-26e4-46f5-a1c4-c069eba5b77f] Running
	I0318 13:12:15.181152 1085975 system_pods.go:89] "kube-apiserver-ha-942957-m02" [16270dbb-6afa-4f37-96dc-846a220bfc7b] Running
	I0318 13:12:15.181156 1085975 system_pods.go:89] "kube-controller-manager-ha-942957" [7543e199-eed7-4379-8f21-eb3171cfcfd4] Running
	I0318 13:12:15.181160 1085975 system_pods.go:89] "kube-controller-manager-ha-942957-m02" [dfdb2822-92f0-4146-8ef5-103524b684d4] Running
	I0318 13:12:15.181164 1085975 system_pods.go:89] "kube-proxy-97vsd" [a4d03704-5a4b-4973-b178-912218d00802] Running
	I0318 13:12:15.181168 1085975 system_pods.go:89] "kube-proxy-vjmnr" [e7dac65a-80b9-4e01-b4b0-10222991b604] Running
	I0318 13:12:15.181173 1085975 system_pods.go:89] "kube-scheduler-ha-942957" [125e01b5-776d-43ef-ac0e-3e21693cee59] Running
	I0318 13:12:15.181179 1085975 system_pods.go:89] "kube-scheduler-ha-942957-m02" [8ca9c332-c8ca-4991-955d-7fc4d0939fd0] Running
	I0318 13:12:15.181185 1085975 system_pods.go:89] "kube-vip-ha-942957" [731b23dc-6b59-4ffb-bf5b-c79279c55d75] Running
	I0318 13:12:15.181190 1085975 system_pods.go:89] "kube-vip-ha-942957-m02" [85b36617-81b8-446c-967c-f3c0c60d3926] Running
	I0318 13:12:15.181203 1085975 system_pods.go:89] "storage-provisioner" [b67e544b-41f2-4be4-90ed-971378c82a76] Running
	I0318 13:12:15.181218 1085975 system_pods.go:126] duration metric: took 211.825119ms to wait for k8s-apps to be running ...
	I0318 13:12:15.181227 1085975 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:12:15.181292 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:12:15.200909 1085975 system_svc.go:56] duration metric: took 19.671034ms WaitForService to wait for kubelet
	I0318 13:12:15.200945 1085975 kubeadm.go:576] duration metric: took 12.859991161s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:12:15.200967 1085975 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:12:15.364750 1085975 request.go:629] Waited for 163.690957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes
	I0318 13:12:15.364843 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes
	I0318 13:12:15.364853 1085975 round_trippers.go:469] Request Headers:
	I0318 13:12:15.364860 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:12:15.364865 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:12:15.369136 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:12:15.370044 1085975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:12:15.370072 1085975 node_conditions.go:123] node cpu capacity is 2
	I0318 13:12:15.370123 1085975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:12:15.370128 1085975 node_conditions.go:123] node cpu capacity is 2
	I0318 13:12:15.370133 1085975 node_conditions.go:105] duration metric: took 169.161669ms to run NodePressure ...
	I0318 13:12:15.370148 1085975 start.go:240] waiting for startup goroutines ...
	I0318 13:12:15.370186 1085975 start.go:254] writing updated cluster config ...
	I0318 13:12:15.372826 1085975 out.go:177] 
	I0318 13:12:15.374338 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:12:15.374436 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:12:15.376117 1085975 out.go:177] * Starting "ha-942957-m03" control-plane node in "ha-942957" cluster
	I0318 13:12:15.377275 1085975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:12:15.377299 1085975 cache.go:56] Caching tarball of preloaded images
	I0318 13:12:15.377441 1085975 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:12:15.377458 1085975 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:12:15.377602 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:12:15.377814 1085975 start.go:360] acquireMachinesLock for ha-942957-m03: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:12:15.377864 1085975 start.go:364] duration metric: took 27.524µs to acquireMachinesLock for "ha-942957-m03"
	I0318 13:12:15.377885 1085975 start.go:93] Provisioning new machine with config: &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:12:15.378046 1085975 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0318 13:12:15.379855 1085975 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:12:15.379949 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:12:15.379990 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:12:15.395657 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46219
	I0318 13:12:15.396172 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:12:15.396719 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:12:15.396767 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:12:15.397203 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:12:15.397479 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetMachineName
	I0318 13:12:15.397631 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:15.397839 1085975 start.go:159] libmachine.API.Create for "ha-942957" (driver="kvm2")
	I0318 13:12:15.397881 1085975 client.go:168] LocalClient.Create starting
	I0318 13:12:15.397922 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 13:12:15.397974 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:12:15.397995 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:12:15.398101 1085975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 13:12:15.398132 1085975 main.go:141] libmachine: Decoding PEM data...
	I0318 13:12:15.398149 1085975 main.go:141] libmachine: Parsing certificate...
	I0318 13:12:15.398176 1085975 main.go:141] libmachine: Running pre-create checks...
	I0318 13:12:15.398188 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .PreCreateCheck
	I0318 13:12:15.398386 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetConfigRaw
	I0318 13:12:15.398904 1085975 main.go:141] libmachine: Creating machine...
	I0318 13:12:15.398923 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .Create
	I0318 13:12:15.399093 1085975 main.go:141] libmachine: (ha-942957-m03) Creating KVM machine...
	I0318 13:12:15.400488 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found existing default KVM network
	I0318 13:12:15.400628 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found existing private KVM network mk-ha-942957
	I0318 13:12:15.400841 1085975 main.go:141] libmachine: (ha-942957-m03) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03 ...
	I0318 13:12:15.400865 1085975 main.go:141] libmachine: (ha-942957-m03) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 13:12:15.403983 1085975 main.go:141] libmachine: (ha-942957-m03) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 13:12:15.404022 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:15.400817 1086668 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:12:15.659790 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:15.659650 1086668 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa...
	I0318 13:12:15.863819 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:15.863658 1086668 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/ha-942957-m03.rawdisk...
	I0318 13:12:15.863879 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Writing magic tar header
	I0318 13:12:15.863891 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Writing SSH key tar header
	I0318 13:12:15.863900 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:15.863777 1086668 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03 ...
	I0318 13:12:15.863932 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03
	I0318 13:12:15.863954 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03 (perms=drwx------)
	I0318 13:12:15.863964 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 13:12:15.864046 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:12:15.864074 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 13:12:15.864086 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 13:12:15.864107 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 13:12:15.864120 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home/jenkins
	I0318 13:12:15.864137 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Checking permissions on dir: /home
	I0318 13:12:15.864149 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Skipping /home - not owner
	I0318 13:12:15.864188 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 13:12:15.864217 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 13:12:15.864235 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 13:12:15.864248 1085975 main.go:141] libmachine: (ha-942957-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 13:12:15.864263 1085975 main.go:141] libmachine: (ha-942957-m03) Creating domain...
	I0318 13:12:15.865164 1085975 main.go:141] libmachine: (ha-942957-m03) define libvirt domain using xml: 
	I0318 13:12:15.865184 1085975 main.go:141] libmachine: (ha-942957-m03) <domain type='kvm'>
	I0318 13:12:15.865194 1085975 main.go:141] libmachine: (ha-942957-m03)   <name>ha-942957-m03</name>
	I0318 13:12:15.865206 1085975 main.go:141] libmachine: (ha-942957-m03)   <memory unit='MiB'>2200</memory>
	I0318 13:12:15.865215 1085975 main.go:141] libmachine: (ha-942957-m03)   <vcpu>2</vcpu>
	I0318 13:12:15.865224 1085975 main.go:141] libmachine: (ha-942957-m03)   <features>
	I0318 13:12:15.865233 1085975 main.go:141] libmachine: (ha-942957-m03)     <acpi/>
	I0318 13:12:15.865243 1085975 main.go:141] libmachine: (ha-942957-m03)     <apic/>
	I0318 13:12:15.865251 1085975 main.go:141] libmachine: (ha-942957-m03)     <pae/>
	I0318 13:12:15.865260 1085975 main.go:141] libmachine: (ha-942957-m03)     
	I0318 13:12:15.865267 1085975 main.go:141] libmachine: (ha-942957-m03)   </features>
	I0318 13:12:15.865278 1085975 main.go:141] libmachine: (ha-942957-m03)   <cpu mode='host-passthrough'>
	I0318 13:12:15.865288 1085975 main.go:141] libmachine: (ha-942957-m03)   
	I0318 13:12:15.865294 1085975 main.go:141] libmachine: (ha-942957-m03)   </cpu>
	I0318 13:12:15.865303 1085975 main.go:141] libmachine: (ha-942957-m03)   <os>
	I0318 13:12:15.865310 1085975 main.go:141] libmachine: (ha-942957-m03)     <type>hvm</type>
	I0318 13:12:15.865322 1085975 main.go:141] libmachine: (ha-942957-m03)     <boot dev='cdrom'/>
	I0318 13:12:15.865330 1085975 main.go:141] libmachine: (ha-942957-m03)     <boot dev='hd'/>
	I0318 13:12:15.865339 1085975 main.go:141] libmachine: (ha-942957-m03)     <bootmenu enable='no'/>
	I0318 13:12:15.865349 1085975 main.go:141] libmachine: (ha-942957-m03)   </os>
	I0318 13:12:15.865358 1085975 main.go:141] libmachine: (ha-942957-m03)   <devices>
	I0318 13:12:15.865369 1085975 main.go:141] libmachine: (ha-942957-m03)     <disk type='file' device='cdrom'>
	I0318 13:12:15.865386 1085975 main.go:141] libmachine: (ha-942957-m03)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/boot2docker.iso'/>
	I0318 13:12:15.865397 1085975 main.go:141] libmachine: (ha-942957-m03)       <target dev='hdc' bus='scsi'/>
	I0318 13:12:15.865405 1085975 main.go:141] libmachine: (ha-942957-m03)       <readonly/>
	I0318 13:12:15.865414 1085975 main.go:141] libmachine: (ha-942957-m03)     </disk>
	I0318 13:12:15.865424 1085975 main.go:141] libmachine: (ha-942957-m03)     <disk type='file' device='disk'>
	I0318 13:12:15.865436 1085975 main.go:141] libmachine: (ha-942957-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 13:12:15.865449 1085975 main.go:141] libmachine: (ha-942957-m03)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/ha-942957-m03.rawdisk'/>
	I0318 13:12:15.865460 1085975 main.go:141] libmachine: (ha-942957-m03)       <target dev='hda' bus='virtio'/>
	I0318 13:12:15.865486 1085975 main.go:141] libmachine: (ha-942957-m03)     </disk>
	I0318 13:12:15.865498 1085975 main.go:141] libmachine: (ha-942957-m03)     <interface type='network'>
	I0318 13:12:15.865508 1085975 main.go:141] libmachine: (ha-942957-m03)       <source network='mk-ha-942957'/>
	I0318 13:12:15.865515 1085975 main.go:141] libmachine: (ha-942957-m03)       <model type='virtio'/>
	I0318 13:12:15.865527 1085975 main.go:141] libmachine: (ha-942957-m03)     </interface>
	I0318 13:12:15.865539 1085975 main.go:141] libmachine: (ha-942957-m03)     <interface type='network'>
	I0318 13:12:15.865551 1085975 main.go:141] libmachine: (ha-942957-m03)       <source network='default'/>
	I0318 13:12:15.865561 1085975 main.go:141] libmachine: (ha-942957-m03)       <model type='virtio'/>
	I0318 13:12:15.865574 1085975 main.go:141] libmachine: (ha-942957-m03)     </interface>
	I0318 13:12:15.865584 1085975 main.go:141] libmachine: (ha-942957-m03)     <serial type='pty'>
	I0318 13:12:15.865596 1085975 main.go:141] libmachine: (ha-942957-m03)       <target port='0'/>
	I0318 13:12:15.865606 1085975 main.go:141] libmachine: (ha-942957-m03)     </serial>
	I0318 13:12:15.865615 1085975 main.go:141] libmachine: (ha-942957-m03)     <console type='pty'>
	I0318 13:12:15.865626 1085975 main.go:141] libmachine: (ha-942957-m03)       <target type='serial' port='0'/>
	I0318 13:12:15.865638 1085975 main.go:141] libmachine: (ha-942957-m03)     </console>
	I0318 13:12:15.865648 1085975 main.go:141] libmachine: (ha-942957-m03)     <rng model='virtio'>
	I0318 13:12:15.865660 1085975 main.go:141] libmachine: (ha-942957-m03)       <backend model='random'>/dev/random</backend>
	I0318 13:12:15.865666 1085975 main.go:141] libmachine: (ha-942957-m03)     </rng>
	I0318 13:12:15.865677 1085975 main.go:141] libmachine: (ha-942957-m03)     
	I0318 13:12:15.865686 1085975 main.go:141] libmachine: (ha-942957-m03)     
	I0318 13:12:15.865694 1085975 main.go:141] libmachine: (ha-942957-m03)   </devices>
	I0318 13:12:15.865703 1085975 main.go:141] libmachine: (ha-942957-m03) </domain>
	I0318 13:12:15.865714 1085975 main.go:141] libmachine: (ha-942957-m03) 
	I0318 13:12:15.873143 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:c3:8a:cc in network default
	I0318 13:12:15.873857 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:15.873896 1085975 main.go:141] libmachine: (ha-942957-m03) Ensuring networks are active...
	I0318 13:12:15.874702 1085975 main.go:141] libmachine: (ha-942957-m03) Ensuring network default is active
	I0318 13:12:15.875110 1085975 main.go:141] libmachine: (ha-942957-m03) Ensuring network mk-ha-942957 is active
	I0318 13:12:15.875537 1085975 main.go:141] libmachine: (ha-942957-m03) Getting domain xml...
	I0318 13:12:15.876385 1085975 main.go:141] libmachine: (ha-942957-m03) Creating domain...
	I0318 13:12:17.113074 1085975 main.go:141] libmachine: (ha-942957-m03) Waiting to get IP...
	I0318 13:12:17.113884 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:17.114363 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:17.114412 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:17.114345 1086668 retry.go:31] will retry after 201.949613ms: waiting for machine to come up
	I0318 13:12:17.317842 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:17.318361 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:17.318386 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:17.318315 1086668 retry.go:31] will retry after 361.088581ms: waiting for machine to come up
	I0318 13:12:17.681105 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:17.681546 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:17.681582 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:17.681503 1086668 retry.go:31] will retry after 417.612899ms: waiting for machine to come up
	I0318 13:12:18.101244 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:18.101743 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:18.101768 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:18.101706 1086668 retry.go:31] will retry after 398.155429ms: waiting for machine to come up
	I0318 13:12:18.502103 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:18.502489 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:18.502519 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:18.502464 1086668 retry.go:31] will retry after 604.308205ms: waiting for machine to come up
	I0318 13:12:19.108316 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:19.108744 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:19.108775 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:19.108697 1086668 retry.go:31] will retry after 891.677543ms: waiting for machine to come up
	I0318 13:12:20.002548 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:20.003175 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:20.003210 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:20.003106 1086668 retry.go:31] will retry after 1.001185435s: waiting for machine to come up
	I0318 13:12:21.006470 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:21.006985 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:21.007014 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:21.006947 1086668 retry.go:31] will retry after 987.859668ms: waiting for machine to come up
	I0318 13:12:21.996407 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:21.996997 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:21.997020 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:21.996948 1086668 retry.go:31] will retry after 1.431664028s: waiting for machine to come up
	I0318 13:12:23.430602 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:23.431081 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:23.431108 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:23.431025 1086668 retry.go:31] will retry after 1.676487591s: waiting for machine to come up
	I0318 13:12:25.109912 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:25.110380 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:25.110411 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:25.110339 1086668 retry.go:31] will retry after 2.714530325s: waiting for machine to come up
	I0318 13:12:27.827207 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:27.827685 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:27.827714 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:27.827635 1086668 retry.go:31] will retry after 2.457496431s: waiting for machine to come up
	I0318 13:12:30.287007 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:30.287471 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:30.287544 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:30.287466 1086668 retry.go:31] will retry after 2.869948309s: waiting for machine to come up
	I0318 13:12:33.160830 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:33.161298 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find current IP address of domain ha-942957-m03 in network mk-ha-942957
	I0318 13:12:33.161323 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | I0318 13:12:33.161247 1086668 retry.go:31] will retry after 3.782381909s: waiting for machine to come up
	I0318 13:12:36.944857 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:36.945373 1085975 main.go:141] libmachine: (ha-942957-m03) Found IP for machine: 192.168.39.135
	I0318 13:12:36.945394 1085975 main.go:141] libmachine: (ha-942957-m03) Reserving static IP address...
	I0318 13:12:36.945404 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has current primary IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:36.945940 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | unable to find host DHCP lease matching {name: "ha-942957-m03", mac: "52:54:00:60:e8:43", ip: "192.168.39.135"} in network mk-ha-942957
	I0318 13:12:37.029672 1085975 main.go:141] libmachine: (ha-942957-m03) Reserved static IP address: 192.168.39.135
	I0318 13:12:37.029712 1085975 main.go:141] libmachine: (ha-942957-m03) Waiting for SSH to be available...
	I0318 13:12:37.029723 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Getting to WaitForSSH function...
	I0318 13:12:37.032526 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.032970 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.033008 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.033160 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Using SSH client type: external
	I0318 13:12:37.033193 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa (-rw-------)
	I0318 13:12:37.033223 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:12:37.033238 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | About to run SSH command:
	I0318 13:12:37.033251 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | exit 0
	I0318 13:12:37.156390 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | SSH cmd err, output: <nil>: 
	I0318 13:12:37.156715 1085975 main.go:141] libmachine: (ha-942957-m03) KVM machine creation complete!
	I0318 13:12:37.157048 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetConfigRaw
	I0318 13:12:37.157637 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:37.157871 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:37.158090 1085975 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 13:12:37.158108 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:12:37.159348 1085975 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 13:12:37.159367 1085975 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 13:12:37.159376 1085975 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 13:12:37.159385 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:37.162153 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.162571 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.162598 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.162723 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:37.162909 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.163056 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.163185 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:37.163362 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:37.163643 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:37.163659 1085975 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 13:12:37.271746 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:12:37.271779 1085975 main.go:141] libmachine: Detecting the provisioner...
	I0318 13:12:37.271792 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:37.274733 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.275180 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.275211 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.275380 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:37.275607 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.275820 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.276004 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:37.276224 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:37.276414 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:37.276428 1085975 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 13:12:37.381137 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 13:12:37.381244 1085975 main.go:141] libmachine: found compatible host: buildroot
	I0318 13:12:37.381254 1085975 main.go:141] libmachine: Provisioning with buildroot...
	I0318 13:12:37.381261 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetMachineName
	I0318 13:12:37.381553 1085975 buildroot.go:166] provisioning hostname "ha-942957-m03"
	I0318 13:12:37.381591 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetMachineName
	I0318 13:12:37.381840 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:37.384755 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.385171 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.385203 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.385390 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:37.385598 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.385784 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.385958 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:37.386147 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:37.386343 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:37.386359 1085975 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-942957-m03 && echo "ha-942957-m03" | sudo tee /etc/hostname
	I0318 13:12:37.510570 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-942957-m03
	
	I0318 13:12:37.510616 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:37.513983 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.514356 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.514397 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.514658 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:37.514877 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.515089 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:37.515277 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:37.515444 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:37.515613 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:37.515630 1085975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-942957-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-942957-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-942957-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:12:37.635916 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:12:37.635951 1085975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 13:12:37.635977 1085975 buildroot.go:174] setting up certificates
	I0318 13:12:37.635993 1085975 provision.go:84] configureAuth start
	I0318 13:12:37.636010 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetMachineName
	I0318 13:12:37.636367 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:12:37.639710 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.640162 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.640196 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.640427 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:37.643111 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.643538 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:37.643567 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:37.643861 1085975 provision.go:143] copyHostCerts
	I0318 13:12:37.643902 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:12:37.643941 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 13:12:37.643955 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:12:37.644042 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 13:12:37.644145 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:12:37.644171 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 13:12:37.644178 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:12:37.644217 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 13:12:37.644278 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:12:37.644306 1085975 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 13:12:37.644315 1085975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:12:37.644348 1085975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 13:12:37.644416 1085975 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.ha-942957-m03 san=[127.0.0.1 192.168.39.135 ha-942957-m03 localhost minikube]
	I0318 13:12:38.043304 1085975 provision.go:177] copyRemoteCerts
	I0318 13:12:38.043383 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:12:38.043421 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:38.046406 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.046708 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.046738 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.046959 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.047213 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.047388 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.047567 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:12:38.130688 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:12:38.130800 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:12:38.160923 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:12:38.161016 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 13:12:38.191115 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:12:38.191210 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:12:38.219429 1085975 provision.go:87] duration metric: took 583.414938ms to configureAuth
	I0318 13:12:38.219470 1085975 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:12:38.219740 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:12:38.219912 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:38.222976 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.223443 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.223469 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.223721 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.223980 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.224165 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.224311 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.224514 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:38.224693 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:38.224707 1085975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:12:38.522199 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:12:38.522243 1085975 main.go:141] libmachine: Checking connection to Docker...
	I0318 13:12:38.522256 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetURL
	I0318 13:12:38.524076 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | Using libvirt version 6000000
	I0318 13:12:38.526778 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.527217 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.527253 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.527469 1085975 main.go:141] libmachine: Docker is up and running!
	I0318 13:12:38.527492 1085975 main.go:141] libmachine: Reticulating splines...
	I0318 13:12:38.527501 1085975 client.go:171] duration metric: took 23.129609775s to LocalClient.Create
	I0318 13:12:38.527527 1085975 start.go:167] duration metric: took 23.129689972s to libmachine.API.Create "ha-942957"
	I0318 13:12:38.527545 1085975 start.go:293] postStartSetup for "ha-942957-m03" (driver="kvm2")
	I0318 13:12:38.527562 1085975 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:12:38.527587 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:38.527885 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:12:38.527922 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:38.530278 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.530649 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.530675 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.530858 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.531033 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.531251 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.531409 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:12:38.616038 1085975 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:12:38.620973 1085975 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:12:38.621012 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 13:12:38.621096 1085975 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 13:12:38.621185 1085975 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 13:12:38.621197 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /etc/ssl/certs/10752082.pem
	I0318 13:12:38.621290 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:12:38.632111 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:12:38.659511 1085975 start.go:296] duration metric: took 131.944258ms for postStartSetup
	I0318 13:12:38.659586 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetConfigRaw
	I0318 13:12:38.660327 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:12:38.663448 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.663820 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.663892 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.664232 1085975 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:12:38.664468 1085975 start.go:128] duration metric: took 23.286407971s to createHost
	I0318 13:12:38.664498 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:38.667126 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.667481 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.667504 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.667636 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.667871 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.668050 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.668211 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.668384 1085975 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:38.668578 1085975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0318 13:12:38.668591 1085975 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:12:38.773377 1085975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767558.753487949
	
	I0318 13:12:38.773407 1085975 fix.go:216] guest clock: 1710767558.753487949
	I0318 13:12:38.773423 1085975 fix.go:229] Guest: 2024-03-18 13:12:38.753487949 +0000 UTC Remote: 2024-03-18 13:12:38.664483321 +0000 UTC m=+167.123361983 (delta=89.004628ms)
	I0318 13:12:38.773447 1085975 fix.go:200] guest clock delta is within tolerance: 89.004628ms
	I0318 13:12:38.773454 1085975 start.go:83] releasing machines lock for "ha-942957-m03", held for 23.395577494s
	I0318 13:12:38.773480 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:38.773770 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:12:38.776659 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.777091 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.777124 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.779480 1085975 out.go:177] * Found network options:
	I0318 13:12:38.781030 1085975 out.go:177]   - NO_PROXY=192.168.39.68,192.168.39.22
	W0318 13:12:38.782426 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 13:12:38.782453 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:12:38.782479 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:38.783158 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:38.783397 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:12:38.783534 1085975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:12:38.783579 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	W0318 13:12:38.783623 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 13:12:38.783652 1085975 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:12:38.783732 1085975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:12:38.783759 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:12:38.786716 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.786841 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.787131 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.787158 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.787187 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:38.787233 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:38.787295 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.787518 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.787521 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:12:38.787708 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.787715 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:12:38.787861 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:12:38.787856 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:12:38.788045 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:12:39.025721 1085975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:12:39.033191 1085975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:12:39.033276 1085975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:12:39.052390 1085975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:12:39.052432 1085975 start.go:494] detecting cgroup driver to use...
	I0318 13:12:39.052548 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:12:39.069919 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:12:39.084577 1085975 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:12:39.084659 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:12:39.099238 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:12:39.113766 1085975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:12:39.243070 1085975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:12:39.408921 1085975 docker.go:233] disabling docker service ...
	I0318 13:12:39.409020 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:12:39.425742 1085975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:12:39.440652 1085975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:12:39.579646 1085975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:12:39.707442 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:12:39.722635 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:12:39.742783 1085975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:12:39.742855 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:12:39.753860 1085975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:12:39.753947 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:12:39.764521 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:12:39.775149 1085975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:12:39.786262 1085975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:12:39.798772 1085975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:12:39.810435 1085975 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:12:39.810507 1085975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:12:39.824792 1085975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:12:39.836435 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:39.963591 1085975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:12:40.111783 1085975 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:12:40.111881 1085975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:12:40.117244 1085975 start.go:562] Will wait 60s for crictl version
	I0318 13:12:40.117314 1085975 ssh_runner.go:195] Run: which crictl
	I0318 13:12:40.122337 1085975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:12:40.168164 1085975 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:12:40.168269 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:12:40.198928 1085975 ssh_runner.go:195] Run: crio --version
	I0318 13:12:40.232691 1085975 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:12:40.234577 1085975 out.go:177]   - env NO_PROXY=192.168.39.68
	I0318 13:12:40.236099 1085975 out.go:177]   - env NO_PROXY=192.168.39.68,192.168.39.22
	I0318 13:12:40.237376 1085975 main.go:141] libmachine: (ha-942957-m03) Calling .GetIP
	I0318 13:12:40.240527 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:40.240941 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:12:40.240971 1085975 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:12:40.241180 1085975 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:12:40.246582 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:12:40.260287 1085975 mustload.go:65] Loading cluster: ha-942957
	I0318 13:12:40.260681 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:12:40.261094 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:12:40.261153 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:12:40.277488 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0318 13:12:40.277970 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:12:40.278474 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:12:40.278498 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:12:40.278874 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:12:40.279115 1085975 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:12:40.280672 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:12:40.280962 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:12:40.280996 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:12:40.295913 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I0318 13:12:40.296377 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:12:40.296927 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:12:40.296959 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:12:40.297315 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:12:40.297534 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:12:40.297752 1085975 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957 for IP: 192.168.39.135
	I0318 13:12:40.297765 1085975 certs.go:194] generating shared ca certs ...
	I0318 13:12:40.297781 1085975 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:12:40.297917 1085975 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 13:12:40.297952 1085975 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 13:12:40.297961 1085975 certs.go:256] generating profile certs ...
	I0318 13:12:40.298048 1085975 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key
	I0318 13:12:40.298073 1085975 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.8c06c577
	I0318 13:12:40.298089 1085975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.8c06c577 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.22 192.168.39.135 192.168.39.254]
	I0318 13:12:40.422797 1085975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.8c06c577 ...
	I0318 13:12:40.422839 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.8c06c577: {Name:mk8f2c47f91c4ca227df518f1be79da263f9ffc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:12:40.423049 1085975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.8c06c577 ...
	I0318 13:12:40.423065 1085975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.8c06c577: {Name:mkfb54bc97c141343d32974fffccba1d6d1decf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:12:40.423167 1085975 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.8c06c577 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt
	I0318 13:12:40.423300 1085975 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.8c06c577 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key
	I0318 13:12:40.423429 1085975 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key
	I0318 13:12:40.423448 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:12:40.423461 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:12:40.423474 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:12:40.423486 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:12:40.423499 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:12:40.423510 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:12:40.423521 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:12:40.423534 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:12:40.423585 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 13:12:40.423616 1085975 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 13:12:40.423626 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 13:12:40.423646 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:12:40.423674 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:12:40.423705 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 13:12:40.423766 1085975 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:12:40.423808 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /usr/share/ca-certificates/10752082.pem
	I0318 13:12:40.423853 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:40.423873 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem -> /usr/share/ca-certificates/1075208.pem
	I0318 13:12:40.423920 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:12:40.427041 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:12:40.427460 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:12:40.427491 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:12:40.427711 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:12:40.427993 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:12:40.428168 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:12:40.428296 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:12:40.508173 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 13:12:40.514284 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 13:12:40.526841 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 13:12:40.531667 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 13:12:40.543814 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 13:12:40.548746 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 13:12:40.563064 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 13:12:40.567906 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 13:12:40.584407 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 13:12:40.589804 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 13:12:40.603753 1085975 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 13:12:40.608351 1085975 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0318 13:12:40.621966 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:12:40.653415 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:12:40.682724 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:12:40.712195 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:12:40.740664 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0318 13:12:40.768850 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:12:40.797819 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:12:40.825606 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:12:40.856382 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 13:12:40.885403 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:12:40.914530 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 13:12:40.945351 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 13:12:40.965155 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 13:12:40.984722 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 13:12:41.004718 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 13:12:41.025176 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 13:12:41.043504 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0318 13:12:41.062091 1085975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 13:12:41.082769 1085975 ssh_runner.go:195] Run: openssl version
	I0318 13:12:41.089387 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 13:12:41.101714 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 13:12:41.106806 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:12:41.106888 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 13:12:41.113364 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 13:12:41.125670 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 13:12:41.137584 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 13:12:41.142707 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:12:41.142783 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 13:12:41.149154 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:12:41.160712 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:12:41.173476 1085975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:41.179302 1085975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:41.179395 1085975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:41.186253 1085975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
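The run above finishes wiring trust on the node: each CA bundle is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients on the guest locate trusted CAs. A minimal Go sketch of that hash-and-symlink step, shelling out to openssl exactly as the log does; the installCA helper and paths are illustrative, not minikube's code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links a CA certificate into /etc/ssl/certs under its OpenSSL
    // subject hash, mirroring the `openssl x509 -hash` + `ln -fs` commands above.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Replace any stale link, then point <hash>.0 at the certificate.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }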
	I0318 13:12:41.198808 1085975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:12:41.203660 1085975 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:12:41.203745 1085975 kubeadm.go:928] updating node {m03 192.168.39.135 8443 v1.28.4 crio true true} ...
	I0318 13:12:41.203911 1085975 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-942957-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
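The drop-in above resets kubelet's ExecStart so the new node registers with --hostname-override=ha-942957-m03 and --node-ip=192.168.39.135. A rough text/template sketch of rendering such a drop-in per node; the template text and field names are simplified stand-ins, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletDropIn is a simplified stand-in for the systemd drop-in shown in
    // the log; KubeletPath, NodeName and NodeIP are filled in per node.
    const kubeletDropIn = `[Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
        // Values taken from the log for node m03.
        err := t.Execute(os.Stdout, struct {
            KubeletPath, NodeName, NodeIP string
        }{
            KubeletPath: "/var/lib/minikube/binaries/v1.28.4/kubelet",
            NodeName:    "ha-942957-m03",
            NodeIP:      "192.168.39.135",
        })
        if err != nil {
            panic(err)
        }
    }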
	I0318 13:12:41.203947 1085975 kube-vip.go:111] generating kube-vip config ...
	I0318 13:12:41.204009 1085975 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 13:12:41.224104 1085975 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 13:12:41.224196 1085975 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
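That manifest is the kube-vip static pod written to /etc/kubernetes/manifests on each control-plane node: it advertises the HA virtual IP 192.168.39.254 on eth0 via ARP, elects a leader through the plndr-cp-lock lease in kube-system, and with lb_enable also load-balances API-server traffic on port 8443. Below is a hedged sketch of composing an equivalent (trimmed) manifest with the Kubernetes API types and sigs.k8s.io/yaml; values are copied from the log, but this is not presented as minikube's implementation:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // A reduced kube-vip static pod; only a subset of the env vars from the
        // logged manifest is included for brevity.
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "kube-vip", Namespace: "kube-system"},
            Spec: corev1.PodSpec{
                HostNetwork: true,
                Containers: []corev1.Container{{
                    Name:  "kube-vip",
                    Image: "ghcr.io/kube-vip/kube-vip:v0.7.1",
                    Args:  []string{"manager"},
                    Env: []corev1.EnvVar{
                        {Name: "address", Value: "192.168.39.254"},
                        {Name: "vip_interface", Value: "eth0"},
                        {Name: "port", Value: "8443"},
                        {Name: "cp_enable", Value: "true"},
                        {Name: "vip_leaderelection", Value: "true"},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "kubeconfig", MountPath: "/etc/kubernetes/admin.conf"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "kubeconfig",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: "/etc/kubernetes/admin.conf"},
                    },
                }},
            },
        }
        out, err := yaml.Marshal(&pod)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }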
	I0318 13:12:41.224264 1085975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:12:41.236822 1085975 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 13:12:41.236908 1085975 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 13:12:41.248647 1085975 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 13:12:41.248686 1085975 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 13:12:41.248705 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:12:41.248715 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 13:12:41.248647 1085975 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 13:12:41.248799 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 13:12:41.248807 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 13:12:41.248911 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 13:12:41.264835 1085975 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 13:12:41.264861 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 13:12:41.264898 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 13:12:41.264925 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 13:12:41.264946 1085975 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 13:12:41.264959 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 13:12:41.274291 1085975 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 13:12:41.274329 1085975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
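Since /var/lib/minikube/binaries/v1.28.4 was empty on the new machine, kubectl, kubeadm and kubelet were copied over from the local cache; the binary.go lines above note that a cold cache would instead fetch them from dl.k8s.io and verify them against the published .sha256 files. A minimal download-and-verify sketch along those lines; the destination path and the fetchAndVerify helper are illustrative:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchAndVerify downloads url to dst and checks it against the hex digest
    // published at url+".sha256". Simplified: no retries, no progress reporting.
    func fetchAndVerify(url, dst string) error {
        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        sumBytes, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        want := strings.Fields(string(sumBytes))[0]

        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer f.Close()

        // Hash the bytes while writing them to disk.
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
        }
        return nil
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
        if err := fetchAndVerify(url, "/tmp/kubelet"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }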
	I0318 13:12:42.353556 1085975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 13:12:42.364551 1085975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 13:12:42.385348 1085975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:12:42.405052 1085975 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 13:12:42.425540 1085975 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 13:12:42.429995 1085975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
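The bash one-liner above keeps the hosts entry idempotent: any line ending in control-plane.minikube.internal is filtered out before the VIP mapping 192.168.39.254 is appended, so repeated provisioning runs do not stack duplicates. The same filter-and-append expressed in Go, as a rough sketch with an illustrative ensureHostsEntry helper:

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry removes any previous mapping for hostname and appends
    // "ip\thostname", mirroring the grep -v / echo pipeline in the log.
    func ensureHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+hostname) {
                continue // drop the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }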
	I0318 13:12:42.443611 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:42.582612 1085975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:12:42.599893 1085975 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:12:42.600249 1085975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:12:42.600305 1085975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:12:42.618027 1085975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I0318 13:12:42.618630 1085975 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:12:42.619404 1085975 main.go:141] libmachine: Using API Version  1
	I0318 13:12:42.619457 1085975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:12:42.619896 1085975 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:12:42.620130 1085975 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:12:42.620347 1085975 start.go:316] joinCluster: &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:12:42.620583 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 13:12:42.620609 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:12:42.624284 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:12:42.625006 1085975 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:12:42.625043 1085975 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:12:42.625200 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:12:42.625418 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:12:42.625651 1085975 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:12:42.625859 1085975 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:12:42.807169 1085975 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:12:42.807241 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3x4ayw.ucvvy5mdkat71a27 --discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-942957-m03 --control-plane --apiserver-advertise-address=192.168.39.135 --apiserver-bind-port=8443"
	I0318 13:13:10.159544 1085975 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3x4ayw.ucvvy5mdkat71a27 --discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-942957-m03 --control-plane --apiserver-advertise-address=192.168.39.135 --apiserver-bind-port=8443": (27.352266765s)
	I0318 13:13:10.159592 1085975 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 13:13:10.604945 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-942957-m03 minikube.k8s.io/updated_at=2024_03_18T13_13_10_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=ha-942957 minikube.k8s.io/primary=false
	I0318 13:13:10.741208 1085975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-942957-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 13:13:10.900260 1085975 start.go:318] duration metric: took 28.279905586s to joinCluster
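Joining m03 as a third control-plane member took roughly 28s end to end: a join command with a non-expiring token is printed on the primary via kubeadm token create --print-join-command --ttl=0, then replayed on the new node with --control-plane and --apiserver-advertise-address=192.168.39.135. A condensed sketch of that two-step flow; it runs the commands locally for illustration, whereas the log shows minikube executing them over SSH on the respective machines, and the join line is printed rather than executed:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1 (on the primary): print a join command with a non-expiring token.
        out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        join := strings.Fields(strings.TrimSpace(string(out)))

        // Step 2 (on the joining node): add the control-plane flags seen in the log.
        join = append(join,
            "--control-plane",
            "--apiserver-advertise-address=192.168.39.135",
            "--apiserver-bind-port=8443",
            "--ignore-preflight-errors=all",
        )
        fmt.Println("would run:", strings.Join(join, " "))
        // exec.Command(join[0], join[1:]...).Run() // uncomment to actually join
    }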
	I0318 13:13:10.900369 1085975 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:13:10.902102 1085975 out.go:177] * Verifying Kubernetes components...
	I0318 13:13:10.900822 1085975 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:13:10.903490 1085975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:13:11.138942 1085975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:13:11.155205 1085975 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:13:11.155607 1085975 kapi.go:59] client config for ha-942957: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.crt", KeyFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key", CAFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 13:13:11.155726 1085975 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.68:8443
	I0318 13:13:11.156059 1085975 node_ready.go:35] waiting up to 6m0s for node "ha-942957-m03" to be "Ready" ...
	I0318 13:13:11.156196 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:11.156210 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:11.156221 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:11.156230 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:11.161930 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:11.657266 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:11.657297 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:11.657310 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:11.657315 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:11.661128 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:12.156796 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:12.156830 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:12.156844 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:12.156851 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:12.161145 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:12.656610 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:12.656640 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:12.656649 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:12.656654 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:12.661105 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:13.156693 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:13.156724 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:13.156741 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:13.156748 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:13.160955 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:13.161814 1085975 node_ready.go:53] node "ha-942957-m03" has status "Ready":"False"
	I0318 13:13:13.657166 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:13.657195 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:13.657209 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:13.657215 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:13.661720 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:14.156719 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:14.156743 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:14.156751 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:14.156754 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:14.160867 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:14.656383 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:14.656408 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:14.656417 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:14.656420 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:14.660336 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:15.157169 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:15.157203 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:15.157214 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:15.157221 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:15.161235 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:15.657132 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:15.657167 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:15.657177 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:15.657182 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:15.661801 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:15.662576 1085975 node_ready.go:53] node "ha-942957-m03" has status "Ready":"False"
	I0318 13:13:16.156906 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:16.156933 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:16.156941 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:16.156947 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:16.160966 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:16.657322 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:16.657348 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:16.657357 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:16.657361 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:16.661553 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:17.157004 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:17.157038 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.157047 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.157051 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.168456 1085975 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 13:13:17.657184 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:17.657212 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.657221 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.657226 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.662171 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:17.663898 1085975 node_ready.go:49] node "ha-942957-m03" has status "Ready":"True"
	I0318 13:13:17.663922 1085975 node_ready.go:38] duration metric: took 6.507835476s for node "ha-942957-m03" to be "Ready" ...
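node_ready.go polls GET /api/v1/nodes/ha-942957-m03 about twice per second until the node's Ready condition turns True, which took ~6.5s here. An equivalent wait written directly against client-go, as a sketch; the kubeconfig path and the 500ms/6m timings are placeholders:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the Ready condition on the node is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            n, err := cs.CoreV1().Nodes().Get(ctx, "ha-942957-m03", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for node Ready")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }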
	I0318 13:13:17.663936 1085975 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:13:17.664021 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:13:17.664035 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.664049 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.664065 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.675735 1085975 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 13:13:17.682451 1085975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.682556 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-f6dtz
	I0318 13:13:17.682572 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.682580 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.682584 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.686432 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:17.687231 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:17.687248 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.687254 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.687257 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.690826 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:17.691386 1085975 pod_ready.go:92] pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:17.691406 1085975 pod_ready.go:81] duration metric: took 8.927182ms for pod "coredns-5dd5756b68-f6dtz" in "kube-system" namespace to be "Ready" ...
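With the node Ready, pod_ready.go walks the system-critical pods (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler), pairing each pod GET with a GET on its node. The readiness test itself keys off the PodReady condition; a small client-go sketch of the same check over the coredns pods, with a placeholder kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True, the same signal
    // the pod_ready.go lines above key off.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"}) // the coredns pods
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
        }
    }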
	I0318 13:13:17.691416 1085975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.691482 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pbr9j
	I0318 13:13:17.691491 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.691500 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.691506 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.694987 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:17.695797 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:17.695815 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.695845 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.695853 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.699574 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:17.700249 1085975 pod_ready.go:92] pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:17.700273 1085975 pod_ready.go:81] duration metric: took 8.843875ms for pod "coredns-5dd5756b68-pbr9j" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.700289 1085975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.700359 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957
	I0318 13:13:17.700370 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.700382 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.700394 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.705683 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:17.706392 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:17.706414 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.706425 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.706430 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.711865 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:17.712474 1085975 pod_ready.go:92] pod "etcd-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:17.712502 1085975 pod_ready.go:81] duration metric: took 12.203007ms for pod "etcd-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.712515 1085975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.712611 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m02
	I0318 13:13:17.712625 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.712636 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.712642 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.720930 1085975 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 13:13:17.721464 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:17.721479 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.721486 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.721491 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.726082 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:17.726897 1085975 pod_ready.go:92] pod "etcd-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:17.726922 1085975 pod_ready.go:81] duration metric: took 14.394384ms for pod "etcd-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.726937 1085975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:17.857286 1085975 request.go:629] Waited for 130.250688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m03
	I0318 13:13:17.857372 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m03
	I0318 13:13:17.857384 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:17.857394 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:17.857404 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:17.861861 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:18.057952 1085975 request.go:629] Waited for 195.370725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.058072 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.058084 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:18.058091 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:18.058095 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:18.072442 1085975 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0318 13:13:18.257653 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m03
	I0318 13:13:18.257680 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:18.257689 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:18.257694 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:18.261641 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:18.457876 1085975 request.go:629] Waited for 195.042096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.457954 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.457962 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:18.457972 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:18.457979 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:18.461800 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:18.727621 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m03
	I0318 13:13:18.727653 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:18.727664 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:18.727669 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:18.731404 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:18.857549 1085975 request.go:629] Waited for 125.300467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.857647 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:18.857657 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:18.857665 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:18.857672 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:18.861525 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:19.227177 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/etcd-ha-942957-m03
	I0318 13:13:19.227203 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:19.227211 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:19.227216 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:19.231641 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:19.257778 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:19.257807 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:19.257817 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:19.257822 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:19.262060 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:19.262505 1085975 pod_ready.go:92] pod "etcd-ha-942957-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:19.262525 1085975 pod_ready.go:81] duration metric: took 1.53558222s for pod "etcd-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
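The recurring "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's local rate limiter rather than the API server: with the default limits (QPS 5, Burst 10) the bursts of pod and node GETs issued by these readiness loops queue for one to two hundred milliseconds each. A brief sketch of raising the limits on the rest.Config, with a placeholder kubeconfig path:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        // Defaults are QPS=5, Burst=10; readiness loops that fire many GETs in a
        // burst hit the limiter and produce the "client-side throttling" waits above.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }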
	I0318 13:13:19.262542 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:19.458027 1085975 request.go:629] Waited for 195.382217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957
	I0318 13:13:19.458117 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957
	I0318 13:13:19.458123 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:19.458131 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:19.458135 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:19.462232 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:19.657717 1085975 request.go:629] Waited for 194.488617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:19.657817 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:19.657823 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:19.657831 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:19.657835 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:19.662050 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:19.662664 1085975 pod_ready.go:92] pod "kube-apiserver-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:19.662688 1085975 pod_ready.go:81] duration metric: took 400.138336ms for pod "kube-apiserver-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:19.662706 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:19.857727 1085975 request.go:629] Waited for 194.92648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m02
	I0318 13:13:19.857808 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m02
	I0318 13:13:19.857813 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:19.857820 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:19.857824 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:19.861878 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:20.058046 1085975 request.go:629] Waited for 195.187918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:20.058116 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:20.058123 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:20.058131 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:20.058135 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:20.062446 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:20.063239 1085975 pod_ready.go:92] pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:20.063267 1085975 pod_ready.go:81] duration metric: took 400.548134ms for pod "kube-apiserver-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:20.063279 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:20.257796 1085975 request.go:629] Waited for 194.399218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m03
	I0318 13:13:20.257870 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m03
	I0318 13:13:20.257877 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:20.257887 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:20.257894 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:20.262526 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:20.458238 1085975 request.go:629] Waited for 194.523823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:20.458323 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:20.458328 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:20.458336 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:20.458340 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:20.462593 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:20.657309 1085975 request.go:629] Waited for 93.18291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m03
	I0318 13:13:20.657377 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m03
	I0318 13:13:20.657403 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:20.657411 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:20.657416 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:20.661819 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:20.857900 1085975 request.go:629] Waited for 195.374284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:20.858004 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:20.858025 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:20.858036 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:20.858042 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:20.861507 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:21.064310 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-942957-m03
	I0318 13:13:21.064336 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:21.064345 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:21.064350 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:21.069152 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:21.257610 1085975 request.go:629] Waited for 187.370183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:21.257725 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:21.257735 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:21.257744 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:21.257748 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:21.262045 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:21.262848 1085975 pod_ready.go:92] pod "kube-apiserver-ha-942957-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:21.262870 1085975 pod_ready.go:81] duration metric: took 1.199579409s for pod "kube-apiserver-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:21.262883 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:21.457265 1085975 request.go:629] Waited for 194.290228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957
	I0318 13:13:21.457377 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957
	I0318 13:13:21.457388 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:21.457395 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:21.457398 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:21.461466 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:21.657513 1085975 request.go:629] Waited for 195.116252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:21.657597 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:21.657605 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:21.657614 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:21.657622 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:21.661548 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:21.662076 1085975 pod_ready.go:92] pod "kube-controller-manager-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:21.662096 1085975 pod_ready.go:81] duration metric: took 399.205425ms for pod "kube-controller-manager-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:21.662106 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:21.857653 1085975 request.go:629] Waited for 195.429972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m02
	I0318 13:13:21.857754 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m02
	I0318 13:13:21.857766 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:21.857779 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:21.857788 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:21.865119 1085975 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 13:13:22.057808 1085975 request.go:629] Waited for 191.804251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:22.057872 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:22.057877 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:22.057884 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:22.057887 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:22.062028 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:22.062493 1085975 pod_ready.go:92] pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:22.062510 1085975 pod_ready.go:81] duration metric: took 400.398049ms for pod "kube-controller-manager-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:22.062527 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:22.257695 1085975 request.go:629] Waited for 195.083612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m03
	I0318 13:13:22.257762 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-942957-m03
	I0318 13:13:22.257767 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:22.257776 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:22.257783 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:22.262252 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:22.457700 1085975 request.go:629] Waited for 194.385677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:22.457802 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:22.457808 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:22.457816 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:22.457821 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:22.461509 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:22.462220 1085975 pod_ready.go:92] pod "kube-controller-manager-ha-942957-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:22.462244 1085975 pod_ready.go:81] duration metric: took 399.706188ms for pod "kube-controller-manager-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:22.462259 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97vsd" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:22.657883 1085975 request.go:629] Waited for 195.518599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97vsd
	I0318 13:13:22.657973 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97vsd
	I0318 13:13:22.657986 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:22.657999 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:22.658011 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:22.662224 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:22.858251 1085975 request.go:629] Waited for 195.398102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:22.858339 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:22.858352 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:22.858365 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:22.858370 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:22.864419 1085975 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:13:22.865369 1085975 pod_ready.go:92] pod "kube-proxy-97vsd" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:22.865391 1085975 pod_ready.go:81] duration metric: took 403.124782ms for pod "kube-proxy-97vsd" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:22.865402 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rxtls" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:23.057505 1085975 request.go:629] Waited for 191.993257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxtls
	I0318 13:13:23.057594 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxtls
	I0318 13:13:23.057606 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:23.057620 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:23.057635 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:23.063204 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:23.257660 1085975 request.go:629] Waited for 193.399419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:23.257725 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:23.257730 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:23.257737 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:23.257741 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:23.262438 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:23.263534 1085975 pod_ready.go:92] pod "kube-proxy-rxtls" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:23.263557 1085975 pod_ready.go:81] duration metric: took 398.149534ms for pod "kube-proxy-rxtls" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:23.263568 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjmnr" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:23.457799 1085975 request.go:629] Waited for 194.091973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjmnr
	I0318 13:13:23.457901 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjmnr
	I0318 13:13:23.457914 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:23.457925 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:23.457934 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:23.463169 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:23.658163 1085975 request.go:629] Waited for 194.39344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:23.658288 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:23.658308 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:23.658318 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:23.658326 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:23.663018 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:23.663699 1085975 pod_ready.go:92] pod "kube-proxy-vjmnr" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:23.663722 1085975 pod_ready.go:81] duration metric: took 400.148512ms for pod "kube-proxy-vjmnr" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:23.663732 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:23.857854 1085975 request.go:629] Waited for 194.051277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957
	I0318 13:13:23.857945 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957
	I0318 13:13:23.857951 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:23.857959 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:23.857964 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:23.862153 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:24.057764 1085975 request.go:629] Waited for 194.415091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:24.057874 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957
	I0318 13:13:24.057886 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.057904 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.057911 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.061476 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:24.062523 1085975 pod_ready.go:92] pod "kube-scheduler-ha-942957" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:24.062555 1085975 pod_ready.go:81] duration metric: took 398.815162ms for pod "kube-scheduler-ha-942957" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:24.062578 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:24.257289 1085975 request.go:629] Waited for 194.622424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m02
	I0318 13:13:24.257391 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m02
	I0318 13:13:24.257399 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.257413 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.257419 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.261115 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:24.458272 1085975 request.go:629] Waited for 196.419197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:24.458346 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m02
	I0318 13:13:24.458353 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.458364 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.458371 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.463579 1085975 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:24.464729 1085975 pod_ready.go:92] pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:24.464753 1085975 pod_ready.go:81] duration metric: took 402.166842ms for pod "kube-scheduler-ha-942957-m02" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:24.464763 1085975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:24.657242 1085975 request.go:629] Waited for 192.391629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m03
	I0318 13:13:24.657337 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-942957-m03
	I0318 13:13:24.657349 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.657361 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.657372 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.661275 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:24.858280 1085975 request.go:629] Waited for 196.386855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:24.858944 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes/ha-942957-m03
	I0318 13:13:24.859028 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.859050 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.859065 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.864039 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:24.864721 1085975 pod_ready.go:92] pod "kube-scheduler-ha-942957-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 13:13:24.864747 1085975 pod_ready.go:81] duration metric: took 399.977484ms for pod "kube-scheduler-ha-942957-m03" in "kube-system" namespace to be "Ready" ...
	I0318 13:13:24.864757 1085975 pod_ready.go:38] duration metric: took 7.200805522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:13:24.864774 1085975 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:13:24.864829 1085975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:13:24.887092 1085975 api_server.go:72] duration metric: took 13.986674275s to wait for apiserver process to appear ...
	I0318 13:13:24.887123 1085975 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:13:24.887149 1085975 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0318 13:13:24.892626 1085975 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0318 13:13:24.892714 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/version
	I0318 13:13:24.892722 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:24.892730 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.892736 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:24.893923 1085975 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 13:13:24.894008 1085975 api_server.go:141] control plane version: v1.28.4
	I0318 13:13:24.894024 1085975 api_server.go:131] duration metric: took 6.894698ms to wait for apiserver health ...
	I0318 13:13:24.894033 1085975 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:13:25.057498 1085975 request.go:629] Waited for 163.36708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:13:25.057561 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:13:25.057566 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:25.057573 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.057578 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:25.064044 1085975 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:13:25.070828 1085975 system_pods.go:59] 24 kube-system pods found
	I0318 13:13:25.070866 1085975 system_pods.go:61] "coredns-5dd5756b68-f6dtz" [78994887-c343-49aa-bc5d-e099da752ad6] Running
	I0318 13:13:25.070873 1085975 system_pods.go:61] "coredns-5dd5756b68-pbr9j" [b011a4b6-807e-4af3-90f5-bc9af8ccd454] Running
	I0318 13:13:25.070878 1085975 system_pods.go:61] "etcd-ha-942957" [e3be3484-ebfd-4409-9209-4ef3b656e8d5] Running
	I0318 13:13:25.070883 1085975 system_pods.go:61] "etcd-ha-942957-m02" [2c328aba-cb1d-4ce7-82d2-ee469be1dea3] Running
	I0318 13:13:25.070888 1085975 system_pods.go:61] "etcd-ha-942957-m03" [0ad37fbd-7093-465a-a0d2-9ba364ea4600] Running
	I0318 13:13:25.070892 1085975 system_pods.go:61] "kindnet-4rf6r" [619ed2f9-ed21-43ba-988d-e25959f55fcb] Running
	I0318 13:13:25.070898 1085975 system_pods.go:61] "kindnet-6rgvl" [eb410475-7c79-4ac1-b7df-a4781100d228] Running
	I0318 13:13:25.070903 1085975 system_pods.go:61] "kindnet-d4smn" [3c9d8fe8-55d9-4682-910f-d2e43efc0a2a] Running
	I0318 13:13:25.070907 1085975 system_pods.go:61] "kube-apiserver-ha-942957" [b0108c9e-26e4-46f5-a1c4-c069eba5b77f] Running
	I0318 13:13:25.070912 1085975 system_pods.go:61] "kube-apiserver-ha-942957-m02" [16270dbb-6afa-4f37-96dc-846a220bfc7b] Running
	I0318 13:13:25.070920 1085975 system_pods.go:61] "kube-apiserver-ha-942957-m03" [c62e4f36-881f-4d6e-b81d-28b250bf0fa4] Running
	I0318 13:13:25.070926 1085975 system_pods.go:61] "kube-controller-manager-ha-942957" [7543e199-eed7-4379-8f21-eb3171cfcfd4] Running
	I0318 13:13:25.070935 1085975 system_pods.go:61] "kube-controller-manager-ha-942957-m02" [dfdb2822-92f0-4146-8ef5-103524b684d4] Running
	I0318 13:13:25.070940 1085975 system_pods.go:61] "kube-controller-manager-ha-942957-m03" [4c68f3e5-a122-4f2d-8aa5-5fa9ffdf4ac5] Running
	I0318 13:13:25.070946 1085975 system_pods.go:61] "kube-proxy-97vsd" [a4d03704-5a4b-4973-b178-912218d00802] Running
	I0318 13:13:25.070952 1085975 system_pods.go:61] "kube-proxy-rxtls" [0ac91025-af8e-4f13-8f0c-eae1b7f4d046] Running
	I0318 13:13:25.070957 1085975 system_pods.go:61] "kube-proxy-vjmnr" [e7dac65a-80b9-4e01-b4b0-10222991b604] Running
	I0318 13:13:25.070963 1085975 system_pods.go:61] "kube-scheduler-ha-942957" [125e01b5-776d-43ef-ac0e-3e21693cee59] Running
	I0318 13:13:25.070972 1085975 system_pods.go:61] "kube-scheduler-ha-942957-m02" [8ca9c332-c8ca-4991-955d-7fc4d0939fd0] Running
	I0318 13:13:25.070978 1085975 system_pods.go:61] "kube-scheduler-ha-942957-m03" [f843b5cb-393e-4890-a188-a750c4571f64] Running
	I0318 13:13:25.070986 1085975 system_pods.go:61] "kube-vip-ha-942957" [731b23dc-6b59-4ffb-bf5b-c79279c55d75] Running
	I0318 13:13:25.070991 1085975 system_pods.go:61] "kube-vip-ha-942957-m02" [85b36617-81b8-446c-967c-f3c0c60d3926] Running
	I0318 13:13:25.070996 1085975 system_pods.go:61] "kube-vip-ha-942957-m03" [b461a5fd-5899-4d2f-aff4-ebf58a0c1b97] Running
	I0318 13:13:25.071002 1085975 system_pods.go:61] "storage-provisioner" [b67e544b-41f2-4be4-90ed-971378c82a76] Running
	I0318 13:13:25.071012 1085975 system_pods.go:74] duration metric: took 176.971252ms to wait for pod list to return data ...
	I0318 13:13:25.071025 1085975 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:13:25.258160 1085975 request.go:629] Waited for 187.038497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0318 13:13:25.258245 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I0318 13:13:25.258252 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:25.258263 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.258268 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:25.262178 1085975 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:25.262345 1085975 default_sa.go:45] found service account: "default"
	I0318 13:13:25.262363 1085975 default_sa.go:55] duration metric: took 191.328533ms for default service account to be created ...
	I0318 13:13:25.262377 1085975 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:13:25.458089 1085975 request.go:629] Waited for 195.609101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:13:25.458174 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I0318 13:13:25.458182 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:25.458192 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.458203 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:25.466648 1085975 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 13:13:25.473378 1085975 system_pods.go:86] 24 kube-system pods found
	I0318 13:13:25.473418 1085975 system_pods.go:89] "coredns-5dd5756b68-f6dtz" [78994887-c343-49aa-bc5d-e099da752ad6] Running
	I0318 13:13:25.473426 1085975 system_pods.go:89] "coredns-5dd5756b68-pbr9j" [b011a4b6-807e-4af3-90f5-bc9af8ccd454] Running
	I0318 13:13:25.473434 1085975 system_pods.go:89] "etcd-ha-942957" [e3be3484-ebfd-4409-9209-4ef3b656e8d5] Running
	I0318 13:13:25.473440 1085975 system_pods.go:89] "etcd-ha-942957-m02" [2c328aba-cb1d-4ce7-82d2-ee469be1dea3] Running
	I0318 13:13:25.473445 1085975 system_pods.go:89] "etcd-ha-942957-m03" [0ad37fbd-7093-465a-a0d2-9ba364ea4600] Running
	I0318 13:13:25.473450 1085975 system_pods.go:89] "kindnet-4rf6r" [619ed2f9-ed21-43ba-988d-e25959f55fcb] Running
	I0318 13:13:25.473456 1085975 system_pods.go:89] "kindnet-6rgvl" [eb410475-7c79-4ac1-b7df-a4781100d228] Running
	I0318 13:13:25.473461 1085975 system_pods.go:89] "kindnet-d4smn" [3c9d8fe8-55d9-4682-910f-d2e43efc0a2a] Running
	I0318 13:13:25.473467 1085975 system_pods.go:89] "kube-apiserver-ha-942957" [b0108c9e-26e4-46f5-a1c4-c069eba5b77f] Running
	I0318 13:13:25.473476 1085975 system_pods.go:89] "kube-apiserver-ha-942957-m02" [16270dbb-6afa-4f37-96dc-846a220bfc7b] Running
	I0318 13:13:25.473483 1085975 system_pods.go:89] "kube-apiserver-ha-942957-m03" [c62e4f36-881f-4d6e-b81d-28b250bf0fa4] Running
	I0318 13:13:25.473491 1085975 system_pods.go:89] "kube-controller-manager-ha-942957" [7543e199-eed7-4379-8f21-eb3171cfcfd4] Running
	I0318 13:13:25.473502 1085975 system_pods.go:89] "kube-controller-manager-ha-942957-m02" [dfdb2822-92f0-4146-8ef5-103524b684d4] Running
	I0318 13:13:25.473511 1085975 system_pods.go:89] "kube-controller-manager-ha-942957-m03" [4c68f3e5-a122-4f2d-8aa5-5fa9ffdf4ac5] Running
	I0318 13:13:25.473524 1085975 system_pods.go:89] "kube-proxy-97vsd" [a4d03704-5a4b-4973-b178-912218d00802] Running
	I0318 13:13:25.473531 1085975 system_pods.go:89] "kube-proxy-rxtls" [0ac91025-af8e-4f13-8f0c-eae1b7f4d046] Running
	I0318 13:13:25.473537 1085975 system_pods.go:89] "kube-proxy-vjmnr" [e7dac65a-80b9-4e01-b4b0-10222991b604] Running
	I0318 13:13:25.473547 1085975 system_pods.go:89] "kube-scheduler-ha-942957" [125e01b5-776d-43ef-ac0e-3e21693cee59] Running
	I0318 13:13:25.473557 1085975 system_pods.go:89] "kube-scheduler-ha-942957-m02" [8ca9c332-c8ca-4991-955d-7fc4d0939fd0] Running
	I0318 13:13:25.473572 1085975 system_pods.go:89] "kube-scheduler-ha-942957-m03" [f843b5cb-393e-4890-a188-a750c4571f64] Running
	I0318 13:13:25.473579 1085975 system_pods.go:89] "kube-vip-ha-942957" [731b23dc-6b59-4ffb-bf5b-c79279c55d75] Running
	I0318 13:13:25.473586 1085975 system_pods.go:89] "kube-vip-ha-942957-m02" [85b36617-81b8-446c-967c-f3c0c60d3926] Running
	I0318 13:13:25.473592 1085975 system_pods.go:89] "kube-vip-ha-942957-m03" [b461a5fd-5899-4d2f-aff4-ebf58a0c1b97] Running
	I0318 13:13:25.473599 1085975 system_pods.go:89] "storage-provisioner" [b67e544b-41f2-4be4-90ed-971378c82a76] Running
	I0318 13:13:25.473610 1085975 system_pods.go:126] duration metric: took 211.224055ms to wait for k8s-apps to be running ...
	I0318 13:13:25.473623 1085975 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:13:25.473697 1085975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:13:25.492440 1085975 system_svc.go:56] duration metric: took 18.802632ms WaitForService to wait for kubelet
	I0318 13:13:25.492475 1085975 kubeadm.go:576] duration metric: took 14.59206562s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:13:25.492500 1085975 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:13:25.658012 1085975 request.go:629] Waited for 165.409924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes
	I0318 13:13:25.658110 1085975 round_trippers.go:463] GET https://192.168.39.68:8443/api/v1/nodes
	I0318 13:13:25.658118 1085975 round_trippers.go:469] Request Headers:
	I0318 13:13:25.658135 1085975 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.658146 1085975 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 13:13:25.662240 1085975 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:25.663543 1085975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:13:25.663564 1085975 node_conditions.go:123] node cpu capacity is 2
	I0318 13:13:25.663596 1085975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:13:25.663600 1085975 node_conditions.go:123] node cpu capacity is 2
	I0318 13:13:25.663604 1085975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:13:25.663607 1085975 node_conditions.go:123] node cpu capacity is 2
	I0318 13:13:25.663611 1085975 node_conditions.go:105] duration metric: took 171.103188ms to run NodePressure ...
	I0318 13:13:25.663625 1085975 start.go:240] waiting for startup goroutines ...
	I0318 13:13:25.663646 1085975 start.go:254] writing updated cluster config ...
	I0318 13:13:25.663976 1085975 ssh_runner.go:195] Run: rm -f paused
	I0318 13:13:25.722800 1085975 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:13:25.726143 1085975 out.go:177] * Done! kubectl is now configured to use "ha-942957" cluster and "default" namespace by default
	
	
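	The "pod_ready" and "Waited for ... due to client-side throttling" lines above come from minikube polling the API server until each control-plane pod reports Ready, with client-go's client-side rate limiter spacing out the GET requests. As a rough illustration only (not minikube's own implementation), the same pattern can be sketched with client-go; the kubeconfig path and QPS/Burst values below are assumptions for the sketch, and the pod name is taken from the log:

	// Minimal sketch, assuming a reachable cluster and a kubeconfig at the
	// path below; illustrates the readiness-polling pattern in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True, mirroring the
	// pod_ready.go `has status "Ready":"True"` lines in the log.
	func podReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		// client-go enforces client-side rate limiting; low QPS/Burst values are
		// what produce "Waited for ... due to client-side throttling" messages
		// when many requests are issued back to back.
		cfg.QPS = 5
		cfg.Burst = 10

		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms, give up after 6 minutes (the log's "waiting up to 6m0s").
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-97vsd", metav1.GetOptions{})
				if err != nil {
					return false, nil // retry on transient errors
				}
				return podReady(pod), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	This is only a sketch of the wait loop the log records; minikube's actual helpers (pod_ready.go, system_pods.go) add per-pod duration metrics and also check the owning node, as seen in the paired /pods and /nodes requests above.
	
	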
	==> CRI-O <==
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.444836128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767883444808322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d8f94c4-af5b-44c3-9276-0ba7dc61300d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.445345858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68009e9f-5269-4826-8d34-4b6a3a08ff91 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.445424770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68009e9f-5269-4826-8d34-4b6a3a08ff91 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.445788346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767609255370699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3084769e1ff800f860efac29271cdcd098fb57447c7f13bd9fec037208560ad7,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710767516453801104,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd6f28e018805c51b58b9f0084b4e15205294268eccc8e62b08ba21552f6f37,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767515456714736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:488ea7fc9ea1fc12da454e30b56509e140cafa5f8321f6441012b164da06dc06,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767454288980537,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297437118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297819371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]
string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9,PodSandboxId:abdb2ce8343b165bfb2de788ac1742c8fff0ed0340f5d996117716b40f3e208a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767452599377884,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767450094181870,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191b8657592054e01f5a5c2b65956fed40ddb87b4aa2adfbf9dfa4cbfcade00,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767431324557459,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767428281487117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1,PodSandboxId:67ed649bec722b75c8665a2b23ba9b84394ec9daa284ecd7024b445b8544a33f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767428186434711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242,PodSandboxId:c0a1a03e46a5503ef7ffcb6ba6895567c59b47e07556df830e861e358e675b8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767428118703376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767428135776611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68009e9f-5269-4826-8d34-4b6a3a08ff91 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.505869107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=682d7038-befd-4137-9213-ec6303704903 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.505965583Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=682d7038-befd-4137-9213-ec6303704903 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.507279318Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0c13c71-0241-4583-bfae-335963d16d1e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.507932756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767883507899161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0c13c71-0241-4583-bfae-335963d16d1e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.508486174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=573126e4-3b03-4fcf-8a4e-5f483bf6105e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.508564463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=573126e4-3b03-4fcf-8a4e-5f483bf6105e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.511900487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767609255370699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3084769e1ff800f860efac29271cdcd098fb57447c7f13bd9fec037208560ad7,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710767516453801104,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd6f28e018805c51b58b9f0084b4e15205294268eccc8e62b08ba21552f6f37,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767515456714736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:488ea7fc9ea1fc12da454e30b56509e140cafa5f8321f6441012b164da06dc06,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767454288980537,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297437118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297819371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]
string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9,PodSandboxId:abdb2ce8343b165bfb2de788ac1742c8fff0ed0340f5d996117716b40f3e208a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767452599377884,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767450094181870,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191b8657592054e01f5a5c2b65956fed40ddb87b4aa2adfbf9dfa4cbfcade00,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767431324557459,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767428281487117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1,PodSandboxId:67ed649bec722b75c8665a2b23ba9b84394ec9daa284ecd7024b445b8544a33f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767428186434711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242,PodSandboxId:c0a1a03e46a5503ef7ffcb6ba6895567c59b47e07556df830e861e358e675b8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767428118703376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767428135776611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=573126e4-3b03-4fcf-8a4e-5f483bf6105e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.558239924Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75e6d00f-7afc-48ed-a778-63e42abb734f name=/runtime.v1.RuntimeService/Version
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.558332051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75e6d00f-7afc-48ed-a778-63e42abb734f name=/runtime.v1.RuntimeService/Version
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.560571728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cd59261-3c98-47c2-9ef5-af112c898e81 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.561073575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767883561047928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cd59261-3c98-47c2-9ef5-af112c898e81 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.562378669Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c400ce11-59ae-47e7-9ebf-08f2ed7fe31a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.562594295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c400ce11-59ae-47e7-9ebf-08f2ed7fe31a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.563160400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767609255370699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3084769e1ff800f860efac29271cdcd098fb57447c7f13bd9fec037208560ad7,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710767516453801104,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd6f28e018805c51b58b9f0084b4e15205294268eccc8e62b08ba21552f6f37,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767515456714736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:488ea7fc9ea1fc12da454e30b56509e140cafa5f8321f6441012b164da06dc06,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767454288980537,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297437118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297819371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]
string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9,PodSandboxId:abdb2ce8343b165bfb2de788ac1742c8fff0ed0340f5d996117716b40f3e208a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767452599377884,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767450094181870,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191b8657592054e01f5a5c2b65956fed40ddb87b4aa2adfbf9dfa4cbfcade00,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767431324557459,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767428281487117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1,PodSandboxId:67ed649bec722b75c8665a2b23ba9b84394ec9daa284ecd7024b445b8544a33f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767428186434711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242,PodSandboxId:c0a1a03e46a5503ef7ffcb6ba6895567c59b47e07556df830e861e358e675b8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767428118703376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767428135776611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c400ce11-59ae-47e7-9ebf-08f2ed7fe31a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.612927247Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60f745aa-2780-4d1c-a312-d50b2737969a name=/runtime.v1.RuntimeService/Version
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.613020357Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60f745aa-2780-4d1c-a312-d50b2737969a name=/runtime.v1.RuntimeService/Version
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.614057763Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=445b14fa-bef3-4a11-9020-e65c1a676789 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.614937564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767883614465919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=445b14fa-bef3-4a11-9020-e65c1a676789 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.615570643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9d05d4a-9dd4-4fc3-814c-d2142286105b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.615624302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9d05d4a-9dd4-4fc3-814c-d2142286105b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:18:03 ha-942957 crio[674]: time="2024-03-18 13:18:03.615947432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767609255370699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3084769e1ff800f860efac29271cdcd098fb57447c7f13bd9fec037208560ad7,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710767516453801104,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fd6f28e018805c51b58b9f0084b4e15205294268eccc8e62b08ba21552f6f37,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767515456714736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:488ea7fc9ea1fc12da454e30b56509e140cafa5f8321f6441012b164da06dc06,PodSandboxId:5fcb429680aac4c0c36e698048aaca9231a9acf64fe8d5a662cdcc4c657120f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767454288980537,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297437118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767454297819371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]
string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9,PodSandboxId:abdb2ce8343b165bfb2de788ac1742c8fff0ed0340f5d996117716b40f3e208a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767452599377884,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767450094181870,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191b8657592054e01f5a5c2b65956fed40ddb87b4aa2adfbf9dfa4cbfcade00,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767431324557459,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767428281487117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1,PodSandboxId:67ed649bec722b75c8665a2b23ba9b84394ec9daa284ecd7024b445b8544a33f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767428186434711,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242,PodSandboxId:c0a1a03e46a5503ef7ffcb6ba6895567c59b47e07556df830e861e358e675b8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767428118703376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767428135776611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9d05d4a-9dd4-4fc3-814c-d2142286105b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bc6f97ca3edce       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   a2d21119e214a       busybox-5b5d89c9d6-h4q2t
	3084769e1ff80       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      6 minutes ago       Running             kube-vip                  1                   750ec46160c5a       kube-vip-ha-942957
	4fd6f28e01880       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   5fcb429680aac       storage-provisioner
	c859be2ef6bde       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   0b6911927b37f       coredns-5dd5756b68-f6dtz
	e2cf377b129d8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   3daf97324e58a       coredns-5dd5756b68-pbr9j
	488ea7fc9ea1f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   5fcb429680aac       storage-provisioner
	3a01c2a33ecf6       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago       Running             kindnet-cni               0                   abdb2ce8343b1       kindnet-6rgvl
	11bc6358bf6d2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago       Running             kube-proxy                0                   c4b520f79bf4b       kube-proxy-97vsd
	3191b86575920       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Exited              kube-vip                  0                   750ec46160c5a       kube-vip-ha-942957
	09364d1b0b8ec       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago       Running             kube-scheduler            0                   6e0049bc30922       kube-scheduler-ha-942957
	829af6255f575       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago       Running             kube-controller-manager   0                   67ed649bec722       kube-controller-manager-ha-942957
	ac909d1fea8aa       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago       Running             etcd                      0                   c9e7a1111cb30       etcd-ha-942957
	ff86796bcd151       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago       Running             kube-apiserver            0                   c0a1a03e46a55       kube-apiserver-ha-942957
	
	
	==> coredns [c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67] <==
	[INFO] 127.0.0.1:50477 - 3303 "HINFO IN 7694853832209238896.6872666870011795296. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023717101s
	[INFO] 10.244.1.2:48458 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.004729471s
	[INFO] 10.244.2.2:47528 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000307517s
	[INFO] 10.244.2.2:48138 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000123939s
	[INFO] 10.244.0.4:40261 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000082467s
	[INFO] 10.244.0.4:59741 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000711436s
	[INFO] 10.244.1.2:33325 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003796173s
	[INFO] 10.244.1.2:40118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184538s
	[INFO] 10.244.1.2:38695 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158047s
	[INFO] 10.244.2.2:39278 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001539379s
	[INFO] 10.244.2.2:48574 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165918s
	[INFO] 10.244.0.4:52698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113008s
	[INFO] 10.244.0.4:50001 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135799s
	[INFO] 10.244.0.4:49373 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159584s
	[INFO] 10.244.1.2:44441 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118463s
	[INFO] 10.244.2.2:42552 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221661s
	[INFO] 10.244.2.2:46062 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090758s
	[INFO] 10.244.0.4:53179 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092569s
	[INFO] 10.244.1.2:45351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128077s
	[INFO] 10.244.1.2:52758 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144551s
	[INFO] 10.244.1.2:47551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000203433s
	[INFO] 10.244.2.2:53980 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115616s
	[INFO] 10.244.2.2:55318 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000181469s
	[INFO] 10.244.0.4:60630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069346s
	[INFO] 10.244.0.4:41251 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040242s
	
	
	==> coredns [e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945] <==
	[INFO] 10.244.1.2:53196 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146942s
	[INFO] 10.244.2.2:41632 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159168s
	[INFO] 10.244.2.2:46720 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002275748s
	[INFO] 10.244.2.2:50733 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000275044s
	[INFO] 10.244.2.2:37004 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138849s
	[INFO] 10.244.2.2:33563 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224767s
	[INFO] 10.244.2.2:42566 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017421s
	[INFO] 10.244.0.4:54486 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00168008s
	[INFO] 10.244.0.4:46746 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001363608s
	[INFO] 10.244.0.4:38530 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231105s
	[INFO] 10.244.0.4:47152 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045351s
	[INFO] 10.244.0.4:57247 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070307s
	[INFO] 10.244.1.2:43996 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140398s
	[INFO] 10.244.1.2:36237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220389s
	[INFO] 10.244.1.2:37302 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111738s
	[INFO] 10.244.2.2:58342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134629s
	[INFO] 10.244.2.2:43645 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160061s
	[INFO] 10.244.0.4:58375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210567s
	[INFO] 10.244.0.4:50302 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075795s
	[INFO] 10.244.0.4:46012 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084361s
	[INFO] 10.244.1.2:37085 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000242114s
	[INFO] 10.244.2.2:47856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000192734s
	[INFO] 10.244.2.2:42553 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000213437s
	[INFO] 10.244.0.4:53951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102273s
	[INFO] 10.244.0.4:44758 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071111s
	
	
	==> describe nodes <==
	Name:               ha-942957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_10_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:10:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:18:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:13:45 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:13:45 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:13:45 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:13:45 +0000   Mon, 18 Mar 2024 13:10:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-942957
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 98d7d2d7e6f44e39a7470fa399e42587
	  System UUID:                98d7d2d7-e6f4-4e39-a747-0fa399e42587
	  Boot ID:                    8d77322f-23ab-4abb-a476-3a13d0f588c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-h4q2t             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 coredns-5dd5756b68-f6dtz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m15s
	  kube-system                 coredns-5dd5756b68-pbr9j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m15s
	  kube-system                 etcd-ha-942957                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m28s
	  kube-system                 kindnet-6rgvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m15s
	  kube-system                 kube-apiserver-ha-942957             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-controller-manager-ha-942957    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-proxy-97vsd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-scheduler-ha-942957             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-vip-ha-942957                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m13s  kube-proxy       
	  Normal  Starting                 7m29s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m28s  kubelet          Node ha-942957 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s  kubelet          Node ha-942957 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m28s  kubelet          Node ha-942957 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m16s  node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal  NodeReady                7m10s  kubelet          Node ha-942957 status is now: NodeReady
	  Normal  RegisteredNode           5m50s  node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal  RegisteredNode           4m39s  node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	
	
	Name:               ha-942957-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_12_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:11:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:14:34 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 13:13:52 +0000   Mon, 18 Mar 2024 13:15:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 13:13:52 +0000   Mon, 18 Mar 2024 13:15:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 13:13:52 +0000   Mon, 18 Mar 2024 13:15:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 13:13:52 +0000   Mon, 18 Mar 2024 13:15:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-942957-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 effa4806d9ac4aae93234a5f4797b41e
	  System UUID:                effa4806-d9ac-4aae-9323-4a5f4797b41e
	  Boot ID:                    7603b2ca-1020-4fd8-bd7f-eeda8ad1e754
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9qmdx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-942957-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-d4smn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m20s
	  kube-system                 kube-apiserver-ha-942957-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-942957-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-vjmnr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-scheduler-ha-942957-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-vip-ha-942957-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m1s   kube-proxy       
	  Normal  RegisteredNode  5m51s  node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  RegisteredNode  4m40s  node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  NodeNotReady    2m50s  node-controller  Node ha-942957-m02 status is now: NodeNotReady
	
	
	Name:               ha-942957-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_13_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:13:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:18:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:13:37 +0000   Mon, 18 Mar 2024 13:13:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:13:37 +0000   Mon, 18 Mar 2024 13:13:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:13:37 +0000   Mon, 18 Mar 2024 13:13:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:13:37 +0000   Mon, 18 Mar 2024 13:13:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    ha-942957-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec2118c8153b4c20b6861bbdce99bda8
	  System UUID:                ec2118c8-153b-4c20-b686-1bbdce99bda8
	  Boot ID:                    456dbdb3-b214-42f6-9f4d-35edec402cf9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-b64gc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-942957-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m57s
	  kube-system                 kindnet-4rf6r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m57s
	  kube-system                 kube-apiserver-ha-942957-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-ha-942957-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-rxtls                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-ha-942957-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-vip-ha-942957-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m55s  kube-proxy       
	  Normal  RegisteredNode  4m57s  node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	  Normal  RegisteredNode  4m56s  node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	  Normal  RegisteredNode  4m40s  node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	
	
	Name:               ha-942957-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_14_08_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:14:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:18:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:14:39 +0000   Mon, 18 Mar 2024 13:14:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:14:39 +0000   Mon, 18 Mar 2024 13:14:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:14:39 +0000   Mon, 18 Mar 2024 13:14:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:14:39 +0000   Mon, 18 Mar 2024 13:14:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    ha-942957-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 b16089a645be4a78a5280af4bb880ea8
	  System UUID:                b16089a6-45be-4a78-a528-0af4bb880ea8
	  Boot ID:                    61da23d5-a659-44da-b851-b354c3ec0a4b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-g4lxl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m56s
	  kube-system                 kube-proxy-gjnnp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m56s (x5 over 3m58s)  kubelet          Node ha-942957-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x5 over 3m58s)  kubelet          Node ha-942957-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x5 over 3m58s)  kubelet          Node ha-942957-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal  NodeReady                3m47s                  kubelet          Node ha-942957-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar18 13:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053736] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042391] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.541459] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Mar18 13:10] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.634426] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.300912] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.067435] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059503] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.165737] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.136769] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.243119] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.843891] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.062146] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.956739] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +1.288333] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.601273] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.093539] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.596662] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.054967] kauditd_printk_skb: 53 callbacks suppressed
	[Mar18 13:11] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7] <==
	{"level":"warn","ts":"2024-03-18T13:18:03.803217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.902769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.923307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.936852Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.941391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.952872Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.962109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.969402Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.97361Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.976848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.989568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:03.996732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.003539Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.004749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.009893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.013526Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.025016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.031586Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.037932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.041782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.048343Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.05917Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.066338Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.073821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T13:18:04.104629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"821abe7be15f44a3","from":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:18:04 up 8 min,  0 users,  load average: 0.23, 0.33, 0.20
	Linux ha-942957 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9] <==
	I0318 13:17:29.817346       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:17:39.833526       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:17:39.833626       1 main.go:227] handling current node
	I0318 13:17:39.833748       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:17:39.833779       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:17:39.833941       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0318 13:17:39.833972       1 main.go:250] Node ha-942957-m03 has CIDR [10.244.2.0/24] 
	I0318 13:17:39.834042       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:17:39.834467       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:17:49.845768       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:17:49.845852       1 main.go:227] handling current node
	I0318 13:17:49.845876       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:17:49.845893       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:17:49.846063       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0318 13:17:49.846084       1 main.go:250] Node ha-942957-m03 has CIDR [10.244.2.0/24] 
	I0318 13:17:49.846149       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:17:49.846167       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:17:59.857420       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:17:59.857503       1 main.go:227] handling current node
	I0318 13:17:59.857526       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:17:59.857543       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:17:59.857768       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0318 13:17:59.857806       1 main.go:250] Node ha-942957-m03 has CIDR [10.244.2.0/24] 
	I0318 13:17:59.857884       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:17:59.857903       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242] <==
	I0318 13:11:59.275888       1 trace.go:236] Trace[1922427888]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a35392cf-5b77-4800-bbe2-098cb914fa85,client:192.168.39.254,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-942957,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 13:11:53.920) (total time: 5355ms):
	Trace[1922427888]: ["GuaranteedUpdate etcd3" audit-id:a35392cf-5b77-4800-bbe2-098cb914fa85,key:/leases/kube-node-lease/ha-942957,type:*coordination.Lease,resource:leases.coordination.k8s.io 5355ms (13:11:53.920)
	Trace[1922427888]:  ---"Txn call completed" 5354ms (13:11:59.275)]
	Trace[1922427888]: [5.355567346s] [5.355567346s] END
	I0318 13:11:59.276140       1 trace.go:236] Trace[760804851]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:05d413b6-6c2c-4bfc-b063-8f6dc2192f21,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-bvovfeqgqy4akpxvecqne7xhka,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 13:11:53.342) (total time: 5933ms):
	Trace[760804851]: ["GuaranteedUpdate etcd3" audit-id:05d413b6-6c2c-4bfc-b063-8f6dc2192f21,key:/leases/kube-system/apiserver-bvovfeqgqy4akpxvecqne7xhka,type:*coordination.Lease,resource:leases.coordination.k8s.io 5933ms (13:11:53.342)
	Trace[760804851]:  ---"Txn call completed" 5932ms (13:11:59.276)]
	Trace[760804851]: [5.933480347s] [5.933480347s] END
	I0318 13:11:59.276443       1 trace.go:236] Trace[1340811478]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:c1a47ab2-169f-4e2d-b4ae-46a43d2ed2a9,client:192.168.39.68,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-942957-m02,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (18-Mar-2024 13:11:57.576) (total time: 1699ms):
	Trace[1340811478]: ["GuaranteedUpdate etcd3" audit-id:c1a47ab2-169f-4e2d-b4ae-46a43d2ed2a9,key:/minions/ha-942957-m02,type:*core.Node,resource:nodes 1698ms (13:11:57.577)
	Trace[1340811478]:  ---"Txn call completed" 1693ms (13:11:59.272)]
	Trace[1340811478]: ---"About to apply patch" 1694ms (13:11:59.272)
	Trace[1340811478]: [1.699413207s] [1.699413207s] END
	I0318 13:11:59.316127       1 trace.go:236] Trace[411801182]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:908e867f-8aac-40f7-b9fe-590307c5397c,client:192.168.39.22,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 13:11:54.041) (total time: 5274ms):
	Trace[411801182]: [5.27471743s] [5.27471743s] END
	I0318 13:11:59.321132       1 trace.go:236] Trace[513579136]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:22e0975e-3f6d-4dc9-9154-9600cfc3e415,client:192.168.39.22,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 13:11:53.038) (total time: 6282ms):
	Trace[513579136]: [6.282712075s] [6.282712075s] END
	I0318 13:11:59.324401       1 trace.go:236] Trace[664548839]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4a4bb367-da40-41c5-8033-86e3ec397d2d,client:192.168.39.22,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 13:11:52.030) (total time: 7294ms):
	Trace[664548839]: [7.294334356s] [7.294334356s] END
	I0318 13:14:09.515140       1 trace.go:236] Trace[215917531]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2ac5b2dc-04bb-41a0-8259-194f167bd578,client:192.168.39.221,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 13:14:08.822) (total time: 692ms):
	Trace[215917531]: ---"Write to database call succeeded" len:145 692ms (13:14:09.514)
	Trace[215917531]: [692.580604ms] [692.580604ms] END
	I0318 13:14:09.519469       1 trace.go:236] Trace[1808083093]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1db19c4f-3835-4305-bf1a-126052ca1a0e,client:192.168.39.221,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 13:14:08.823) (total time: 695ms):
	Trace[1808083093]: ---"Write to database call succeeded" len:148 695ms (13:14:09.519)
	Trace[1808083093]: [695.996137ms] [695.996137ms] END
	
	
	==> kube-controller-manager [829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1] <==
	I0318 13:13:27.216129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="139.844µs"
	I0318 13:13:27.328104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="74.08951ms"
	I0318 13:13:27.328338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.875µs"
	I0318 13:13:29.457159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.166694ms"
	I0318 13:13:29.457758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="242.285µs"
	I0318 13:13:29.942857       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.942457ms"
	I0318 13:13:29.961459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.518404ms"
	I0318 13:13:29.962867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="506.786µs"
	I0318 13:13:30.055943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.79244ms"
	I0318 13:13:30.058422       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="103.191µs"
	E0318 13:14:06.707244       1 certificate_controller.go:146] Sync csr-rzs6k failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-rzs6k": the object has been modified; please apply your changes to the latest version and try again
	E0318 13:14:06.718825       1 certificate_controller.go:146] Sync csr-rzs6k failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-rzs6k": the object has been modified; please apply your changes to the latest version and try again
	I0318 13:14:08.228253       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-942957-m04\" does not exist"
	I0318 13:14:08.250180       1 range_allocator.go:380] "Set node PodCIDR" node="ha-942957-m04" podCIDRs=["10.244.3.0/24"]
	I0318 13:14:08.284345       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tvmh7"
	I0318 13:14:08.284730       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-g4lxl"
	I0318 13:14:08.459903       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-84mtv"
	I0318 13:14:08.468335       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-z2gzx"
	I0318 13:14:08.563970       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-fg4h5"
	I0318 13:14:12.498955       1 event.go:307] "Event occurred" object="ha-942957-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller"
	I0318 13:14:12.525622       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-942957-m04"
	I0318 13:14:17.193148       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-942957-m04"
	I0318 13:15:14.771604       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-942957-m04"
	I0318 13:15:14.884523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.139417ms"
	I0318 13:15:14.884721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="113.102µs"
	
	
	==> kube-proxy [11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1] <==
	I0318 13:10:50.316906       1 server_others.go:69] "Using iptables proxy"
	I0318 13:10:50.334356       1 node.go:141] Successfully retrieved node IP: 192.168.39.68
	I0318 13:10:50.377482       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:10:50.377528       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:10:50.380218       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:10:50.380333       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:10:50.380556       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:10:50.380608       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:50.382751       1 config.go:188] "Starting service config controller"
	I0318 13:10:50.383144       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:10:50.383193       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:10:50.383198       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:10:50.384291       1 config.go:315] "Starting node config controller"
	I0318 13:10:50.384323       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:10:50.483809       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:10:50.483940       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:10:50.484417       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99] <==
	W0318 13:10:32.689633       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:10:32.689805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:10:32.784751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 13:10:32.784934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 13:10:32.818989       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 13:10:32.819207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:10:32.870024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:10:32.870081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:10:32.880608       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:10:32.881010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:10:32.893396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:10:32.893502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:10:33.002949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:10:33.003142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:10:33.017021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 13:10:33.017071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:10:34.299125       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0318 13:13:07.299612       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-g8dzj\": pod kube-proxy-g8dzj is already assigned to node \"ha-942957-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-g8dzj" node="ha-942957-m03"
	E0318 13:13:07.300066       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 840bd016-9a33-4eea-90ed-324f143b9dac(kube-system/kube-proxy-g8dzj) wasn't assumed so cannot be forgotten"
	E0318 13:13:07.300226       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-g8dzj\": pod kube-proxy-g8dzj is already assigned to node \"ha-942957-m03\"" pod="kube-system/kube-proxy-g8dzj"
	I0318 13:13:07.300450       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-g8dzj" node="ha-942957-m03"
	E0318 13:14:08.326079       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-g4lxl\": pod kindnet-g4lxl is already assigned to node \"ha-942957-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-g4lxl" node="ha-942957-m04"
	E0318 13:14:08.328913       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 2797cae1-24be-4e84-a8ee-39572432d9b6(kube-system/kindnet-g4lxl) wasn't assumed so cannot be forgotten"
	E0318 13:14:08.329041       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-g4lxl\": pod kindnet-g4lxl is already assigned to node \"ha-942957-m04\"" pod="kube-system/kindnet-g4lxl"
	I0318 13:14:08.329098       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-g4lxl" node="ha-942957-m04"
	
	
	==> kubelet <==
	Mar 18 13:13:35 ha-942957 kubelet[1368]: E0318 13:13:35.068304    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:13:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:13:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:13:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:13:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:14:35 ha-942957 kubelet[1368]: E0318 13:14:35.069918    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:14:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:14:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:14:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:14:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:15:35 ha-942957 kubelet[1368]: E0318 13:15:35.071131    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:15:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:15:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:15:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:15:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:16:35 ha-942957 kubelet[1368]: E0318 13:16:35.067730    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:16:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:16:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:16:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:16:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:17:35 ha-942957 kubelet[1368]: E0318 13:17:35.066821    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:17:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:17:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:17:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:17:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-942957 -n ha-942957
helpers_test.go:261: (dbg) Run:  kubectl --context ha-942957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (60.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (373.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-942957 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-942957 -v=7 --alsologtostderr
E0318 13:19:17.918463 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:19:45.602139 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-942957 -v=7 --alsologtostderr: exit status 82 (2m2.735694511s)

                                                
                                                
-- stdout --
	* Stopping node "ha-942957-m04"  ...
	* Stopping node "ha-942957-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:18:05.641379 1091769 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:18:05.641554 1091769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:18:05.641569 1091769 out.go:304] Setting ErrFile to fd 2...
	I0318 13:18:05.641576 1091769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:18:05.641792 1091769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:18:05.642081 1091769 out.go:298] Setting JSON to false
	I0318 13:18:05.642181 1091769 mustload.go:65] Loading cluster: ha-942957
	I0318 13:18:05.642546 1091769 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:18:05.642641 1091769 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:18:05.642832 1091769 mustload.go:65] Loading cluster: ha-942957
	I0318 13:18:05.642961 1091769 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:18:05.642987 1091769 stop.go:39] StopHost: ha-942957-m04
	I0318 13:18:05.643370 1091769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:05.643413 1091769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:05.661154 1091769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42393
	I0318 13:18:05.661744 1091769 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:05.662485 1091769 main.go:141] libmachine: Using API Version  1
	I0318 13:18:05.662517 1091769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:05.662864 1091769 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:05.665243 1091769 out.go:177] * Stopping node "ha-942957-m04"  ...
	I0318 13:18:05.666952 1091769 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 13:18:05.666995 1091769 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:18:05.667280 1091769 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 13:18:05.667312 1091769 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:18:05.670505 1091769 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:18:05.670965 1091769 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:13:50 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:18:05.670999 1091769 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:18:05.671175 1091769 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:18:05.671321 1091769 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:18:05.671522 1091769 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:18:05.671661 1091769 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:18:05.755466 1091769 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 13:18:05.809403 1091769 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 13:18:05.863605 1091769 main.go:141] libmachine: Stopping "ha-942957-m04"...
	I0318 13:18:05.863663 1091769 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:18:05.865353 1091769 main.go:141] libmachine: (ha-942957-m04) Calling .Stop
	I0318 13:18:05.869025 1091769 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 0/120
	I0318 13:18:06.870452 1091769 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 1/120
	I0318 13:18:07.873103 1091769 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:18:07.874393 1091769 main.go:141] libmachine: Machine "ha-942957-m04" was stopped.
	I0318 13:18:07.874415 1091769 stop.go:75] duration metric: took 2.20748851s to stop
	I0318 13:18:07.874441 1091769 stop.go:39] StopHost: ha-942957-m03
	I0318 13:18:07.874885 1091769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:18:07.874934 1091769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:18:07.891091 1091769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I0318 13:18:07.891634 1091769 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:18:07.892234 1091769 main.go:141] libmachine: Using API Version  1
	I0318 13:18:07.892262 1091769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:18:07.892696 1091769 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:18:07.894775 1091769 out.go:177] * Stopping node "ha-942957-m03"  ...
	I0318 13:18:07.895969 1091769 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 13:18:07.896001 1091769 main.go:141] libmachine: (ha-942957-m03) Calling .DriverName
	I0318 13:18:07.896310 1091769 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 13:18:07.896343 1091769 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHHostname
	I0318 13:18:07.899535 1091769 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:18:07.900113 1091769 main.go:141] libmachine: (ha-942957-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e8:43", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:30 +0000 UTC Type:0 Mac:52:54:00:60:e8:43 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-942957-m03 Clientid:01:52:54:00:60:e8:43}
	I0318 13:18:07.900147 1091769 main.go:141] libmachine: (ha-942957-m03) DBG | domain ha-942957-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:60:e8:43 in network mk-ha-942957
	I0318 13:18:07.900299 1091769 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHPort
	I0318 13:18:07.900495 1091769 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHKeyPath
	I0318 13:18:07.900640 1091769 main.go:141] libmachine: (ha-942957-m03) Calling .GetSSHUsername
	I0318 13:18:07.900785 1091769 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m03/id_rsa Username:docker}
	I0318 13:18:07.984145 1091769 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 13:18:08.039997 1091769 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 13:18:08.097770 1091769 main.go:141] libmachine: Stopping "ha-942957-m03"...
	I0318 13:18:08.097801 1091769 main.go:141] libmachine: (ha-942957-m03) Calling .GetState
	I0318 13:18:08.099448 1091769 main.go:141] libmachine: (ha-942957-m03) Calling .Stop
	I0318 13:18:08.103025 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 0/120
	I0318 13:18:09.104529 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 1/120
	I0318 13:18:10.106158 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 2/120
	I0318 13:18:11.108597 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 3/120
	I0318 13:18:12.110486 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 4/120
	I0318 13:18:13.112799 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 5/120
	I0318 13:18:14.114447 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 6/120
	I0318 13:18:15.116268 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 7/120
	I0318 13:18:16.118389 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 8/120
	I0318 13:18:17.120208 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 9/120
	I0318 13:18:18.122544 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 10/120
	I0318 13:18:19.124020 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 11/120
	I0318 13:18:20.126872 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 12/120
	I0318 13:18:21.128546 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 13/120
	I0318 13:18:22.130275 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 14/120
	I0318 13:18:23.132474 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 15/120
	I0318 13:18:24.133967 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 16/120
	I0318 13:18:25.135721 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 17/120
	I0318 13:18:26.137077 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 18/120
	I0318 13:18:27.138969 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 19/120
	I0318 13:18:28.140961 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 20/120
	I0318 13:18:29.143028 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 21/120
	I0318 13:18:30.144709 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 22/120
	I0318 13:18:31.146267 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 23/120
	I0318 13:18:32.147953 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 24/120
	I0318 13:18:33.150112 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 25/120
	I0318 13:18:34.151786 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 26/120
	I0318 13:18:35.153239 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 27/120
	I0318 13:18:36.154936 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 28/120
	I0318 13:18:37.156613 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 29/120
	I0318 13:18:38.158614 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 30/120
	I0318 13:18:39.160677 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 31/120
	I0318 13:18:40.162566 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 32/120
	I0318 13:18:41.164265 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 33/120
	I0318 13:18:42.165904 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 34/120
	I0318 13:18:43.167174 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 35/120
	I0318 13:18:44.168667 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 36/120
	I0318 13:18:45.170271 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 37/120
	I0318 13:18:46.171724 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 38/120
	I0318 13:18:47.173274 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 39/120
	I0318 13:18:48.175410 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 40/120
	I0318 13:18:49.177099 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 41/120
	I0318 13:18:50.178666 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 42/120
	I0318 13:18:51.180269 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 43/120
	I0318 13:18:52.182620 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 44/120
	I0318 13:18:53.184643 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 45/120
	I0318 13:18:54.186023 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 46/120
	I0318 13:18:55.187675 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 47/120
	I0318 13:18:56.189298 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 48/120
	I0318 13:18:57.190599 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 49/120
	I0318 13:18:58.192487 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 50/120
	I0318 13:18:59.194435 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 51/120
	I0318 13:19:00.195912 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 52/120
	I0318 13:19:01.197474 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 53/120
	I0318 13:19:02.199200 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 54/120
	I0318 13:19:03.201378 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 55/120
	I0318 13:19:04.202926 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 56/120
	I0318 13:19:05.204805 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 57/120
	I0318 13:19:06.206722 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 58/120
	I0318 13:19:07.208285 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 59/120
	I0318 13:19:08.210088 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 60/120
	I0318 13:19:09.211526 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 61/120
	I0318 13:19:10.212954 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 62/120
	I0318 13:19:11.214315 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 63/120
	I0318 13:19:12.215716 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 64/120
	I0318 13:19:13.217430 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 65/120
	I0318 13:19:14.218895 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 66/120
	I0318 13:19:15.220275 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 67/120
	I0318 13:19:16.221804 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 68/120
	I0318 13:19:17.223612 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 69/120
	I0318 13:19:18.225997 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 70/120
	I0318 13:19:19.227365 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 71/120
	I0318 13:19:20.228924 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 72/120
	I0318 13:19:21.230426 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 73/120
	I0318 13:19:22.231898 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 74/120
	I0318 13:19:23.234037 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 75/120
	I0318 13:19:24.235494 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 76/120
	I0318 13:19:25.236925 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 77/120
	I0318 13:19:26.238388 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 78/120
	I0318 13:19:27.239741 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 79/120
	I0318 13:19:28.241426 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 80/120
	I0318 13:19:29.242838 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 81/120
	I0318 13:19:30.244346 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 82/120
	I0318 13:19:31.245753 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 83/120
	I0318 13:19:32.247196 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 84/120
	I0318 13:19:33.248883 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 85/120
	I0318 13:19:34.250581 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 86/120
	I0318 13:19:35.252165 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 87/120
	I0318 13:19:36.253528 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 88/120
	I0318 13:19:37.255111 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 89/120
	I0318 13:19:38.257498 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 90/120
	I0318 13:19:39.258916 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 91/120
	I0318 13:19:40.260527 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 92/120
	I0318 13:19:41.262133 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 93/120
	I0318 13:19:42.263489 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 94/120
	I0318 13:19:43.264924 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 95/120
	I0318 13:19:44.266286 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 96/120
	I0318 13:19:45.267593 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 97/120
	I0318 13:19:46.269084 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 98/120
	I0318 13:19:47.270609 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 99/120
	I0318 13:19:48.272570 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 100/120
	I0318 13:19:49.274007 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 101/120
	I0318 13:19:50.276495 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 102/120
	I0318 13:19:51.278169 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 103/120
	I0318 13:19:52.280063 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 104/120
	I0318 13:19:53.281691 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 105/120
	I0318 13:19:54.283153 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 106/120
	I0318 13:19:55.284586 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 107/120
	I0318 13:19:56.286014 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 108/120
	I0318 13:19:57.287431 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 109/120
	I0318 13:19:58.288919 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 110/120
	I0318 13:19:59.290302 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 111/120
	I0318 13:20:00.292387 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 112/120
	I0318 13:20:01.294318 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 113/120
	I0318 13:20:02.295810 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 114/120
	I0318 13:20:03.297725 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 115/120
	I0318 13:20:04.299133 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 116/120
	I0318 13:20:05.300587 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 117/120
	I0318 13:20:06.302943 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 118/120
	I0318 13:20:07.304356 1091769 main.go:141] libmachine: (ha-942957-m03) Waiting for machine to stop 119/120
	I0318 13:20:08.305420 1091769 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 13:20:08.305488 1091769 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 13:20:08.307720 1091769 out.go:177] 
	W0318 13:20:08.309186 1091769 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 13:20:08.309207 1091769 out.go:239] * 
	W0318 13:20:08.313817 1091769 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:20:08.315465 1091769 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-942957 -v=7 --alsologtostderr" : exit status 82
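When a stop times out like this, the guest that ignored the ACPI shutdown request can be inspected directly on the CI host with standard libvirt tooling. A minimal sketch, assuming shell access to the host and the default qemu:///system connection (not something the test itself runs):

    # Show libvirt domains and their states; the stuck node appears as "running"
    virsh -c qemu:///system list --all
    # Retry a graceful ACPI shutdown, then force the domain off if it still ignores the request
    virsh -c qemu:///system shutdown ha-942957-m03
    virsh -c qemu:///system destroy ha-942957-m03
    # Collect minikube's own diagnostics, as the error box above suggests
    out/minikube-linux-amd64 -p ha-942957 logs --file=logs.txt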
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-942957 --wait=true -v=7 --alsologtostderr
E0318 13:22:37.319396 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-942957 --wait=true -v=7 --alsologtostderr: (4m7.308571451s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-942957
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-942957 -n ha-942957
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 logs -n 25
E0318 13:24:17.918839 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-942957 logs -n 25: (2.128043422s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m02:/home/docker/cp-test_ha-942957-m03_ha-942957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m02 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m03_ha-942957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04:/home/docker/cp-test_ha-942957-m03_ha-942957-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m04 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m03_ha-942957-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp testdata/cp-test.txt                                               | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile666867504/001/cp-test_ha-942957-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957:/home/docker/cp-test_ha-942957-m04_ha-942957.txt                      |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957 sudo cat                                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957.txt                                |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m02:/home/docker/cp-test_ha-942957-m04_ha-942957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m02 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03:/home/docker/cp-test_ha-942957-m04_ha-942957-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m03 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-942957 node stop m02 -v=7                                                    | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-942957 node start m02 -v=7                                                   | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-942957 -v=7                                                          | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-942957 -v=7                                                               | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-942957 --wait=true -v=7                                                   | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:20 UTC | 18 Mar 24 13:24 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-942957                                                               | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:24 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:20:08
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:20:08.378816 1092163 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:20:08.378933 1092163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:20:08.378940 1092163 out.go:304] Setting ErrFile to fd 2...
	I0318 13:20:08.378944 1092163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:20:08.379143 1092163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:20:08.379678 1092163 out.go:298] Setting JSON to false
	I0318 13:20:08.380747 1092163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":18155,"bootTime":1710749853,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:20:08.380819 1092163 start.go:139] virtualization: kvm guest
	I0318 13:20:08.383368 1092163 out.go:177] * [ha-942957] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:20:08.385468 1092163 notify.go:220] Checking for updates...
	I0318 13:20:08.385490 1092163 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 13:20:08.386801 1092163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:20:08.388197 1092163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:20:08.389621 1092163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:20:08.391019 1092163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:20:08.392376 1092163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:20:08.394208 1092163 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:20:08.394351 1092163 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:20:08.394819 1092163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:20:08.394876 1092163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:20:08.410437 1092163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34999
	I0318 13:20:08.410953 1092163 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:20:08.411549 1092163 main.go:141] libmachine: Using API Version  1
	I0318 13:20:08.411574 1092163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:20:08.411934 1092163 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:20:08.412116 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:20:08.449830 1092163 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:20:08.451270 1092163 start.go:297] selected driver: kvm2
	I0318 13:20:08.451293 1092163 start.go:901] validating driver "kvm2" against &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:20:08.451475 1092163 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:20:08.451887 1092163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:20:08.451968 1092163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:20:08.468214 1092163 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:20:08.468956 1092163 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:20:08.469039 1092163 cni.go:84] Creating CNI manager for ""
	I0318 13:20:08.469063 1092163 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 13:20:08.469124 1092163 start.go:340] cluster config:
	{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:20:08.469276 1092163 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:20:08.471257 1092163 out.go:177] * Starting "ha-942957" primary control-plane node in "ha-942957" cluster
	I0318 13:20:08.472814 1092163 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:20:08.472850 1092163 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:20:08.472858 1092163 cache.go:56] Caching tarball of preloaded images
	I0318 13:20:08.472939 1092163 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:20:08.472955 1092163 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:20:08.473123 1092163 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:20:08.473380 1092163 start.go:360] acquireMachinesLock for ha-942957: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:20:08.473464 1092163 start.go:364] duration metric: took 61.357µs to acquireMachinesLock for "ha-942957"
	I0318 13:20:08.473491 1092163 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:20:08.473503 1092163 fix.go:54] fixHost starting: 
	I0318 13:20:08.473893 1092163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:20:08.473933 1092163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:20:08.488731 1092163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36083
	I0318 13:20:08.489166 1092163 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:20:08.489686 1092163 main.go:141] libmachine: Using API Version  1
	I0318 13:20:08.489715 1092163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:20:08.490124 1092163 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:20:08.490376 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:20:08.490544 1092163 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:20:08.492158 1092163 fix.go:112] recreateIfNeeded on ha-942957: state=Running err=<nil>
	W0318 13:20:08.492179 1092163 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:20:08.495234 1092163 out.go:177] * Updating the running kvm2 "ha-942957" VM ...
	I0318 13:20:08.496676 1092163 machine.go:94] provisionDockerMachine start ...
	I0318 13:20:08.496697 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:20:08.496915 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:08.499435 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.499922 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.499964 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.500071 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:20:08.500228 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.500407 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.500555 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:20:08.500723 1092163 main.go:141] libmachine: Using SSH client type: native
	I0318 13:20:08.500995 1092163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:20:08.501009 1092163 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:20:08.609655 1092163 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-942957
	
	I0318 13:20:08.609685 1092163 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:20:08.609932 1092163 buildroot.go:166] provisioning hostname "ha-942957"
	I0318 13:20:08.609977 1092163 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:20:08.610180 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:08.613000 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.613451 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.613484 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.613629 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:20:08.613827 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.613987 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.614120 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:20:08.614305 1092163 main.go:141] libmachine: Using SSH client type: native
	I0318 13:20:08.614570 1092163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:20:08.614585 1092163 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-942957 && echo "ha-942957" | sudo tee /etc/hostname
	I0318 13:20:08.742269 1092163 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-942957
	
	I0318 13:20:08.742305 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:08.745476 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.745923 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.745960 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.746181 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:20:08.746400 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.746629 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.746820 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:20:08.746987 1092163 main.go:141] libmachine: Using SSH client type: native
	I0318 13:20:08.747221 1092163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:20:08.747246 1092163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-942957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-942957/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-942957' | sudo tee -a /etc/hosts; 
				fi
			fi
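For reference, when the guarded block above fires, the net effect is a single /etc/hosts entry mapping the loopback alias to the new hostname, e.g. (a sketch of the expected line, not captured output):

    127.0.1.1 ha-942957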
	I0318 13:20:08.853391 1092163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:20:08.853424 1092163 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 13:20:08.853464 1092163 buildroot.go:174] setting up certificates
	I0318 13:20:08.853478 1092163 provision.go:84] configureAuth start
	I0318 13:20:08.853497 1092163 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:20:08.853773 1092163 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:20:08.856146 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.856534 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.856568 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.856648 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:08.858817 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.859192 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.859216 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.859382 1092163 provision.go:143] copyHostCerts
	I0318 13:20:08.859429 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:20:08.859476 1092163 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 13:20:08.859490 1092163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:20:08.859586 1092163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 13:20:08.859724 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:20:08.859764 1092163 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 13:20:08.859775 1092163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:20:08.859820 1092163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 13:20:08.859917 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:20:08.859940 1092163 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 13:20:08.859946 1092163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:20:08.859982 1092163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 13:20:08.860085 1092163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.ha-942957 san=[127.0.0.1 192.168.39.68 ha-942957 localhost minikube]
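A quick way to confirm that the SANs listed above actually ended up in the generated server certificate is to decode it on the CI host; a sketch assuming openssl is available and using the path from the log line above:

    openssl x509 -in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'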
	I0318 13:20:08.967598 1092163 provision.go:177] copyRemoteCerts
	I0318 13:20:08.967682 1092163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:20:08.967717 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:08.970562 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.970970 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.970999 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.971238 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:20:08.971450 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.971600 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:20:08.971763 1092163 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:20:09.056652 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:20:09.056728 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:20:09.088063 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:20:09.088161 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0318 13:20:09.116053 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:20:09.116153 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:20:09.145445 1092163 provision.go:87] duration metric: took 291.946863ms to configureAuth
	I0318 13:20:09.145478 1092163 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:20:09.145698 1092163 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:20:09.145784 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:09.148682 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:09.149100 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:09.149128 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:09.149324 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:20:09.149568 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:09.149744 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:09.149907 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:20:09.150081 1092163 main.go:141] libmachine: Using SSH client type: native
	I0318 13:20:09.150273 1092163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:20:09.150294 1092163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:21:40.002485 1092163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:21:40.002599 1092163 machine.go:97] duration metric: took 1m31.50589929s to provisionDockerMachine
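Nearly all of that 1m31.5s is the single SSH round trip for the `sudo systemctl restart crio` command issued at 13:20:09 and only acknowledged at 13:21:40 above. If the crio restart time needs to be confirmed independently, a sketch using the profile's own ssh subcommand (assumes the guest is reachable; not part of the test run):

    out/minikube-linux-amd64 -p ha-942957 ssh "systemctl show crio -p ActiveEnterTimestamp -p ExecMainStartTimestamp"
    out/minikube-linux-amd64 -p ha-942957 ssh "sudo journalctl -u crio --no-pager | tail -n 20"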
	I0318 13:21:40.002623 1092163 start.go:293] postStartSetup for "ha-942957" (driver="kvm2")
	I0318 13:21:40.002684 1092163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:21:40.002716 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.003181 1092163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:21:40.003272 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:21:40.007130 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.007630 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.007661 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.007956 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:21:40.008194 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.008383 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:21:40.008575 1092163 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:21:40.096950 1092163 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:21:40.101869 1092163 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:21:40.101934 1092163 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 13:21:40.102040 1092163 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 13:21:40.102128 1092163 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 13:21:40.102140 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /etc/ssl/certs/10752082.pem
	I0318 13:21:40.102256 1092163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:21:40.113462 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:21:40.142229 1092163 start.go:296] duration metric: took 139.5287ms for postStartSetup
	I0318 13:21:40.142299 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.142661 1092163 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0318 13:21:40.142693 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:21:40.145686 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.146184 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.146219 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.146469 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:21:40.146702 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.146894 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:21:40.147049 1092163 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	W0318 13:21:40.231172 1092163 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0318 13:21:40.231204 1092163 fix.go:56] duration metric: took 1m31.757701359s for fixHost
	I0318 13:21:40.231239 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:21:40.234153 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.234767 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.234801 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.234966 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:21:40.235260 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.235466 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.235633 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:21:40.235797 1092163 main.go:141] libmachine: Using SSH client type: native
	I0318 13:21:40.235999 1092163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:21:40.236010 1092163 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:21:40.345552 1092163 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710768100.313190208
	
	I0318 13:21:40.345576 1092163 fix.go:216] guest clock: 1710768100.313190208
	I0318 13:21:40.345583 1092163 fix.go:229] Guest: 2024-03-18 13:21:40.313190208 +0000 UTC Remote: 2024-03-18 13:21:40.231218385 +0000 UTC m=+91.903144127 (delta=81.971823ms)
	I0318 13:21:40.345613 1092163 fix.go:200] guest clock delta is within tolerance: 81.971823ms
	I0318 13:21:40.345618 1092163 start.go:83] releasing machines lock for "ha-942957", held for 1m31.872142796s
	I0318 13:21:40.345639 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.345931 1092163 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:21:40.348938 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.349349 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.349370 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.349601 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.350162 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.350400 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.350519 1092163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:21:40.350576 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:21:40.350644 1092163 ssh_runner.go:195] Run: cat /version.json
	I0318 13:21:40.350674 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:21:40.353438 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.353625 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.353799 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.353828 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.353959 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:21:40.354104 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.354128 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.354132 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.354288 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:21:40.354302 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:21:40.354440 1092163 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:21:40.354495 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.354648 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:21:40.354779 1092163 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:21:40.459133 1092163 ssh_runner.go:195] Run: systemctl --version
	I0318 13:21:40.466080 1092163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:21:40.636674 1092163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:21:40.645350 1092163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:21:40.645430 1092163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:21:40.656320 1092163 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 13:21:40.656359 1092163 start.go:494] detecting cgroup driver to use...
	I0318 13:21:40.656439 1092163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:21:40.676947 1092163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:21:40.692818 1092163 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:21:40.692892 1092163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:21:40.708499 1092163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:21:40.725712 1092163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:21:40.902609 1092163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:21:41.071291 1092163 docker.go:233] disabling docker service ...
	I0318 13:21:41.071375 1092163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:21:41.094012 1092163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:21:41.110076 1092163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:21:41.279666 1092163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:21:41.450061 1092163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:21:41.466342 1092163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:21:41.485980 1092163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:21:41.486066 1092163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:21:41.497609 1092163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:21:41.497690 1092163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:21:41.510086 1092163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:21:41.522169 1092163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:21:41.535338 1092163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:21:41.548295 1092163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:21:41.560014 1092163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:21:41.571018 1092163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:21:41.721801 1092163 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:21:46.569386 1092163 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.847541809s)
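The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup_manager, conmon_cgroup), reloads systemd, and restarts CRI-O, which is what makes the restart take ~4.8s here. A minimal sketch that replays the same drop-in edits locally with os/exec; the commands are taken verbatim from the log lines and this is only an illustration, not minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same edits as the log above, applied on the local machine.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		out, err := exec.Command("sh", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\n%s", c, out)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
	}
}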
	I0318 13:21:46.569431 1092163 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:21:46.569481 1092163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:21:46.576638 1092163 start.go:562] Will wait 60s for crictl version
	I0318 13:21:46.576708 1092163 ssh_runner.go:195] Run: which crictl
	I0318 13:21:46.580932 1092163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:21:46.629519 1092163 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:21:46.629631 1092163 ssh_runner.go:195] Run: crio --version
	I0318 13:21:46.667602 1092163 ssh_runner.go:195] Run: crio --version
	I0318 13:21:46.705325 1092163 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:21:46.706867 1092163 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:21:46.709797 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:46.710189 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:46.710218 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:46.710392 1092163 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:21:46.715796 1092163 kubeadm.go:877] updating cluster {Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:21:46.716024 1092163 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:21:46.716086 1092163 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:21:46.770991 1092163 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:21:46.771017 1092163 crio.go:415] Images already preloaded, skipping extraction
	I0318 13:21:46.771084 1092163 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:21:46.818430 1092163 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:21:46.818459 1092163 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:21:46.818469 1092163 kubeadm.go:928] updating node { 192.168.39.68 8443 v1.28.4 crio true true} ...
	I0318 13:21:46.818633 1092163 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-942957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:21:46.818702 1092163 ssh_runner.go:195] Run: crio config
	I0318 13:21:46.877206 1092163 cni.go:84] Creating CNI manager for ""
	I0318 13:21:46.877233 1092163 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 13:21:46.877245 1092163 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:21:46.877268 1092163 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-942957 NodeName:ha-942957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:21:46.877427 1092163 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-942957"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
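The KubeletConfiguration fragment in the kubeadm config above pins cgroupDriver to cgroupfs and the runtime endpoint to the CRI-O socket, which must agree with the 02-crio.conf edits made earlier in this run. A trimmed sketch that parses just those fields with gopkg.in/yaml.v3 (requires that module in go.mod; the struct below is illustrative, not the upstream kubelet config type):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletCfg models only the fields of the KubeletConfiguration fragment
// that have to line up with the CRI-O setup.
type kubeletCfg struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	HairpinMode              string `yaml:"hairpinMode"`
}

func main() {
	doc := `
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
`
	var cfg kubeletCfg
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	// The kubelet's cgroup driver has to match the cgroup_manager written
	// into /etc/crio/crio.conf.d/02-crio.conf earlier in the log.
	fmt.Println("driver matches CRI-O:", cfg.CgroupDriver == "cgroupfs")
	fmt.Println("runtime endpoint:", cfg.ContainerRuntimeEndpoint)
}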
	
	I0318 13:21:46.877449 1092163 kube-vip.go:111] generating kube-vip config ...
	I0318 13:21:46.877493 1092163 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 13:21:46.892327 1092163 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 13:21:46.892450 1092163 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
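The manifest above runs kube-vip with leader election so that 192.168.39.254:8443 stays reachable as the HA control-plane endpoint while individual control-plane nodes restart. A small reachability probe of that VIP, skipping TLS verification purely because the apiserver certificate is signed by minikubeCA rather than a system CA; this probe is a sketch, not part of the test suite:

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port come from the kube-vip manifest above (address / lb_port).
	addr := net.JoinHostPort("192.168.39.254", "8443")
	conn, err := tls.DialWithDialer(&net.Dialer{Timeout: 3 * time.Second}, "tcp", addr,
		&tls.Config{InsecureSkipVerify: true}) // reachability check only
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("kube-vip is answering on", addr)
}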
	I0318 13:21:46.892550 1092163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:21:46.905612 1092163 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:21:46.905681 1092163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 13:21:46.918237 1092163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0318 13:21:46.937357 1092163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:21:46.955920 1092163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0318 13:21:46.974163 1092163 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 13:21:46.995212 1092163 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 13:21:47.000234 1092163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:21:47.156018 1092163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:21:47.171556 1092163 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957 for IP: 192.168.39.68
	I0318 13:21:47.171584 1092163 certs.go:194] generating shared ca certs ...
	I0318 13:21:47.171602 1092163 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:21:47.171763 1092163 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 13:21:47.171803 1092163 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 13:21:47.171812 1092163 certs.go:256] generating profile certs ...
	I0318 13:21:47.171942 1092163 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key
	I0318 13:21:47.171990 1092163 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.1ed4ec2d
	I0318 13:21:47.172021 1092163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.1ed4ec2d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.22 192.168.39.135 192.168.39.254]
	I0318 13:21:47.242691 1092163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.1ed4ec2d ...
	I0318 13:21:47.242730 1092163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.1ed4ec2d: {Name:mkb2aa9441539bd10df2b9f542ba2041cc70f24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:21:47.242922 1092163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.1ed4ec2d ...
	I0318 13:21:47.242936 1092163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.1ed4ec2d: {Name:mk641cf40bcc588b24d450e29e1193be4a235ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:21:47.243010 1092163 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.1ed4ec2d -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt
	I0318 13:21:47.243217 1092163 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.1ed4ec2d -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key
	I0318 13:21:47.243361 1092163 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key
	I0318 13:21:47.243379 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:21:47.243391 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:21:47.243404 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:21:47.243417 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:21:47.243428 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:21:47.243439 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:21:47.243459 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:21:47.243472 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:21:47.243525 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 13:21:47.243555 1092163 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 13:21:47.243565 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 13:21:47.243585 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:21:47.243606 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:21:47.243630 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 13:21:47.243666 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:21:47.243693 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /usr/share/ca-certificates/10752082.pem
	I0318 13:21:47.243707 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:21:47.243719 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem -> /usr/share/ca-certificates/1075208.pem
	I0318 13:21:47.244390 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:21:47.273809 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:21:47.333566 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:21:47.361818 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:21:47.388574 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 13:21:47.416790 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:21:47.443573 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:21:47.470237 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:21:47.497363 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 13:21:47.526366 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:21:47.552916 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 13:21:47.579908 1092163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:21:47.598596 1092163 ssh_runner.go:195] Run: openssl version
	I0318 13:21:47.605448 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 13:21:47.617568 1092163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 13:21:47.622908 1092163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:21:47.622991 1092163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 13:21:47.629328 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:21:47.639904 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:21:47.651766 1092163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:21:47.656750 1092163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:21:47.656829 1092163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:21:47.663197 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:21:47.673664 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 13:21:47.685924 1092163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 13:21:47.690895 1092163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:21:47.690977 1092163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 13:21:47.697334 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 13:21:47.707610 1092163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:21:47.712852 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:21:47.719284 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:21:47.725482 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:21:47.731731 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:21:47.738235 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:21:47.744539 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
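The openssl runs above use `-checkend 86400` to ask whether each control-plane certificate will expire within the next 24 hours. The same check in Go (crypto/x509), with the path taken from the stat call above; the checkend helper name exists only for this sketch, and note openssl's exit code is the inverse of the boolean returned here:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path will have expired
// d from now; `openssl x509 -noout -checkend <seconds>` succeeds exactly
// when this returns false.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}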
	I0318 13:21:47.750658 1092163 kubeadm.go:391] StartCluster: {Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:21:47.750790 1092163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:21:47.750860 1092163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:21:47.799209 1092163 cri.go:89] found id: "01653352dfce8459170f2cd9ba8aac4f7bab0af56170cce15b3dddc4742c33cf"
	I0318 13:21:47.799237 1092163 cri.go:89] found id: "062711d0381093b919397a578a6f81b07f2007e909c784461f578842f5a218fc"
	I0318 13:21:47.799242 1092163 cri.go:89] found id: "17b5593ecae1319d07eafe30bda73a980b98ac5321fd162acfe59b3400d9c1a5"
	I0318 13:21:47.799249 1092163 cri.go:89] found id: "16c29c371f2defe9234b0a49883af3e411bd07c5e0e171d7b3e812dd9325f474"
	I0318 13:21:47.799253 1092163 cri.go:89] found id: "0b8695cf0ac16e0c699161c8d6757608a3b09833cf6b0ac3aa257a9f9443bdb1"
	I0318 13:21:47.799257 1092163 cri.go:89] found id: "3084769e1ff800f860efac29271cdcd098fb57447c7f13bd9fec037208560ad7"
	I0318 13:21:47.799260 1092163 cri.go:89] found id: "c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67"
	I0318 13:21:47.799264 1092163 cri.go:89] found id: "e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945"
	I0318 13:21:47.799269 1092163 cri.go:89] found id: "3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9"
	I0318 13:21:47.799278 1092163 cri.go:89] found id: "11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1"
	I0318 13:21:47.799297 1092163 cri.go:89] found id: "09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99"
	I0318 13:21:47.799301 1092163 cri.go:89] found id: "829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1"
	I0318 13:21:47.799305 1092163 cri.go:89] found id: "ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7"
	I0318 13:21:47.799309 1092163 cri.go:89] found id: "ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242"
	I0318 13:21:47.799317 1092163 cri.go:89] found id: ""
	I0318 13:21:47.799369 1092163 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.503075637Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768256503051322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=912479d1-298a-4628-aff8-9117e98bd2b8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.503957978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11109773-d4e1-4e2e-b6aa-243a789a00b0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.504020391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11109773-d4e1-4e2e-b6aa-243a789a00b0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.504448663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4599693f823e6f478a7a630e48b8674ad7c69329a3629729c9a8f618c02e5c7,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768205014900453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb03720fabfe408fd30b89414d0d98f5f4bb3691e6b3750ded072923397f5915,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710768183016581645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302d6170ad889e67d47fda9faec591edcf2b4b9ecb26c4278871f72aee01329,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710768159008079630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d89e600a2a98b1500584b038f2146960362b64eb3f2196d13980dfdd501984f,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710768153014877517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e9ab707f966c8321bddb3178cbf660b8a9043e64cccd383de342fac31c59c1,PodSandboxId:a18c3ff2ffa868df97b01e93d8c2fcb8cccc618913371cb02fec37bea5e1f336,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710768146346203966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0b795d2bdf59643abf48253311965d7b9a4105fca8cbf0df38de022edfa637,PodSandboxId:f9a167f3b0217a2c19b61d4839225f1770ab7d71cf5088a455c11aea68920d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710768113827080591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:b1d58a64362f84f8062abcbcf47099323b0cc69785aecf233a1f559245de2e27,PodSandboxId:d33166cce56756d4692cbc70da65786ec6a41e7e75ff804223349b38ce0c2491,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710768113617284640,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:355dac9c183eaf505cd72808ccccdacf517f30ad0d26d6960f5d4e3094471652,PodSandboxId:1fb74babe23fc821be46df51b5aa6ff7451cbb71c03b28f2d94a9d5c768ad0f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113469762995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710768113203439888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 2,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4790438e5d4965634666c4e3a565cc62de39f46d29a13d541d44c19afd87e9b,PodSandboxId:52ff694a82f3e8179666764794f97e5f39c5d7ec0665326121b45bbf0af6dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113231529083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcec3d819d32ce0be5c1b7b8abda9244523879a1788b2233fe934760b1126d90,PodSandboxId:3b02456c19e4132b342660c2e01b7b06f1dbc9fba8e4647697bdbb5dab719935,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710768113129840110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f93
83385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c83f6d14628f38cfecaf08a1c77500ec15d1305da4bad5aa27aec23b1931a82,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710768113148301449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5051b1a59a5bff141d1b26536b335d46317ed445c531f56a9d47fcf96874074f,PodSandboxId:c3736fc3b42858eff89a728d8815e0748b6bc0557cb21e0efbd90a21ecf82be5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710768113134214782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{i
o.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710768112831896821,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.
container.hash: cf352393,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96898006f26bb34adf8d6605356268848814dcfae484bd2e03f87a657d58c459,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710768112978305364,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubern
etes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8695cf0ac16e0c699161c8d6757608a3b09833cf6b0ac3aa257a9f9443bdb1,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767913024614151,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767609255434077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297552095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297898177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710767450094191982,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXI
TED,CreatedAt:1710767428281748509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:171076742813612617
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11109773-d4e1-4e2e-b6aa-243a789a00b0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.554763420Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f2e6607-780b-440b-8261-c7dd9a4679b6 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.554838451Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f2e6607-780b-440b-8261-c7dd9a4679b6 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.556135605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fcac2f41-8eac-49cc-baca-96b9ccbbb289 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.557929039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768256557902129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcac2f41-8eac-49cc-baca-96b9ccbbb289 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.559007520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f0ee291-abe8-4725-802c-eb2b8d079efe name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.559062259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f0ee291-abe8-4725-802c-eb2b8d079efe name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.559634839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4599693f823e6f478a7a630e48b8674ad7c69329a3629729c9a8f618c02e5c7,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768205014900453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb03720fabfe408fd30b89414d0d98f5f4bb3691e6b3750ded072923397f5915,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710768183016581645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302d6170ad889e67d47fda9faec591edcf2b4b9ecb26c4278871f72aee01329,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710768159008079630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d89e600a2a98b1500584b038f2146960362b64eb3f2196d13980dfdd501984f,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710768153014877517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e9ab707f966c8321bddb3178cbf660b8a9043e64cccd383de342fac31c59c1,PodSandboxId:a18c3ff2ffa868df97b01e93d8c2fcb8cccc618913371cb02fec37bea5e1f336,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710768146346203966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0b795d2bdf59643abf48253311965d7b9a4105fca8cbf0df38de022edfa637,PodSandboxId:f9a167f3b0217a2c19b61d4839225f1770ab7d71cf5088a455c11aea68920d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710768113827080591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:b1d58a64362f84f8062abcbcf47099323b0cc69785aecf233a1f559245de2e27,PodSandboxId:d33166cce56756d4692cbc70da65786ec6a41e7e75ff804223349b38ce0c2491,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710768113617284640,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:355dac9c183eaf505cd72808ccccdacf517f30ad0d26d6960f5d4e3094471652,PodSandboxId:1fb74babe23fc821be46df51b5aa6ff7451cbb71c03b28f2d94a9d5c768ad0f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113469762995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710768113203439888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 2,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4790438e5d4965634666c4e3a565cc62de39f46d29a13d541d44c19afd87e9b,PodSandboxId:52ff694a82f3e8179666764794f97e5f39c5d7ec0665326121b45bbf0af6dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113231529083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcec3d819d32ce0be5c1b7b8abda9244523879a1788b2233fe934760b1126d90,PodSandboxId:3b02456c19e4132b342660c2e01b7b06f1dbc9fba8e4647697bdbb5dab719935,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710768113129840110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f93
83385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c83f6d14628f38cfecaf08a1c77500ec15d1305da4bad5aa27aec23b1931a82,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710768113148301449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5051b1a59a5bff141d1b26536b335d46317ed445c531f56a9d47fcf96874074f,PodSandboxId:c3736fc3b42858eff89a728d8815e0748b6bc0557cb21e0efbd90a21ecf82be5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710768113134214782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{i
o.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710768112831896821,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.
container.hash: cf352393,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96898006f26bb34adf8d6605356268848814dcfae484bd2e03f87a657d58c459,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710768112978305364,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubern
etes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8695cf0ac16e0c699161c8d6757608a3b09833cf6b0ac3aa257a9f9443bdb1,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767913024614151,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767609255434077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297552095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297898177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710767450094191982,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXI
TED,CreatedAt:1710767428281748509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:171076742813612617
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f0ee291-abe8-4725-802c-eb2b8d079efe name=/runtime.v1.RuntimeService/ListContainers
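	(Editor's note, for context on the entries above: the repeated /runtime.v1.RuntimeService debug lines are CRI-O answering periodic Version, ImageFsInfo and ListContainers polls from its CRI client, typically the kubelet; an empty ContainerFilter returns the full container list, including CONTAINER_EXITED containers. The following is a minimal illustrative sketch, not part of the test suite, of issuing the same RPCs directly against the node's CRI socket. Assumptions: CRI-O is listening on /var/run/crio/crio.sock and the google.golang.org/grpc and k8s.io/cri-api Go modules are available.)

	// cri_list_sketch.go - hypothetical helper; queries the CRI-O runtime service
	// the same way the log entries above show it being queried.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O runtime endpoint over its unix socket (assumed path).
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// Same call as the "/runtime.v1.RuntimeService/Version" entries in the log.
		ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("runtime: %s %s (API %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// Same call as the "/runtime.v1.RuntimeService/ListContainers" entries:
		// no filter fields set, so the full container list is returned.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.12s  %-25s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}

	(On a minikube node the same information is usually reachable with crictl against the same socket; the Go sketch above just mirrors the exact RPC names seen in the debug log.)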
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.618501518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5687ec00-10a2-4332-9d24-3de0bd108d67 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.618631737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5687ec00-10a2-4332-9d24-3de0bd108d67 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.620568219Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a8d435a-b15c-4c5f-b0f8-661bdc42887d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.621314543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768256621277987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a8d435a-b15c-4c5f-b0f8-661bdc42887d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.622370794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce9a8528-7ffd-41aa-a9ef-484d2571fc21 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.622450876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce9a8528-7ffd-41aa-a9ef-484d2571fc21 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.623213861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4599693f823e6f478a7a630e48b8674ad7c69329a3629729c9a8f618c02e5c7,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768205014900453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb03720fabfe408fd30b89414d0d98f5f4bb3691e6b3750ded072923397f5915,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710768183016581645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302d6170ad889e67d47fda9faec591edcf2b4b9ecb26c4278871f72aee01329,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710768159008079630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d89e600a2a98b1500584b038f2146960362b64eb3f2196d13980dfdd501984f,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710768153014877517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e9ab707f966c8321bddb3178cbf660b8a9043e64cccd383de342fac31c59c1,PodSandboxId:a18c3ff2ffa868df97b01e93d8c2fcb8cccc618913371cb02fec37bea5e1f336,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710768146346203966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0b795d2bdf59643abf48253311965d7b9a4105fca8cbf0df38de022edfa637,PodSandboxId:f9a167f3b0217a2c19b61d4839225f1770ab7d71cf5088a455c11aea68920d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710768113827080591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:b1d58a64362f84f8062abcbcf47099323b0cc69785aecf233a1f559245de2e27,PodSandboxId:d33166cce56756d4692cbc70da65786ec6a41e7e75ff804223349b38ce0c2491,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710768113617284640,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:355dac9c183eaf505cd72808ccccdacf517f30ad0d26d6960f5d4e3094471652,PodSandboxId:1fb74babe23fc821be46df51b5aa6ff7451cbb71c03b28f2d94a9d5c768ad0f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113469762995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710768113203439888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 2,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4790438e5d4965634666c4e3a565cc62de39f46d29a13d541d44c19afd87e9b,PodSandboxId:52ff694a82f3e8179666764794f97e5f39c5d7ec0665326121b45bbf0af6dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113231529083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcec3d819d32ce0be5c1b7b8abda9244523879a1788b2233fe934760b1126d90,PodSandboxId:3b02456c19e4132b342660c2e01b7b06f1dbc9fba8e4647697bdbb5dab719935,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710768113129840110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f93
83385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c83f6d14628f38cfecaf08a1c77500ec15d1305da4bad5aa27aec23b1931a82,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710768113148301449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5051b1a59a5bff141d1b26536b335d46317ed445c531f56a9d47fcf96874074f,PodSandboxId:c3736fc3b42858eff89a728d8815e0748b6bc0557cb21e0efbd90a21ecf82be5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710768113134214782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{i
o.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710768112831896821,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.
container.hash: cf352393,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96898006f26bb34adf8d6605356268848814dcfae484bd2e03f87a657d58c459,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710768112978305364,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubern
etes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8695cf0ac16e0c699161c8d6757608a3b09833cf6b0ac3aa257a9f9443bdb1,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767913024614151,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767609255434077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297552095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297898177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710767450094191982,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXI
TED,CreatedAt:1710767428281748509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:171076742813612617
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce9a8528-7ffd-41aa-a9ef-484d2571fc21 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.673045430Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fa998f4-2839-466b-8a04-a299ac0dc104 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.673158861Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fa998f4-2839-466b-8a04-a299ac0dc104 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.675269088Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c1adf9d-4eee-4e58-988f-70eb57821a7e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.675931585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768256675902069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c1adf9d-4eee-4e58-988f-70eb57821a7e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.678262109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=204b8e4c-3e02-40bb-a9b0-625b0e9b676a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.678365493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=204b8e4c-3e02-40bb-a9b0-625b0e9b676a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:24:16 ha-942957 crio[3995]: time="2024-03-18 13:24:16.679069009Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4599693f823e6f478a7a630e48b8674ad7c69329a3629729c9a8f618c02e5c7,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768205014900453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb03720fabfe408fd30b89414d0d98f5f4bb3691e6b3750ded072923397f5915,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710768183016581645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302d6170ad889e67d47fda9faec591edcf2b4b9ecb26c4278871f72aee01329,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710768159008079630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d89e600a2a98b1500584b038f2146960362b64eb3f2196d13980dfdd501984f,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710768153014877517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e9ab707f966c8321bddb3178cbf660b8a9043e64cccd383de342fac31c59c1,PodSandboxId:a18c3ff2ffa868df97b01e93d8c2fcb8cccc618913371cb02fec37bea5e1f336,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710768146346203966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0b795d2bdf59643abf48253311965d7b9a4105fca8cbf0df38de022edfa637,PodSandboxId:f9a167f3b0217a2c19b61d4839225f1770ab7d71cf5088a455c11aea68920d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710768113827080591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:b1d58a64362f84f8062abcbcf47099323b0cc69785aecf233a1f559245de2e27,PodSandboxId:d33166cce56756d4692cbc70da65786ec6a41e7e75ff804223349b38ce0c2491,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710768113617284640,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:355dac9c183eaf505cd72808ccccdacf517f30ad0d26d6960f5d4e3094471652,PodSandboxId:1fb74babe23fc821be46df51b5aa6ff7451cbb71c03b28f2d94a9d5c768ad0f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113469762995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710768113203439888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 2,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4790438e5d4965634666c4e3a565cc62de39f46d29a13d541d44c19afd87e9b,PodSandboxId:52ff694a82f3e8179666764794f97e5f39c5d7ec0665326121b45bbf0af6dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113231529083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcec3d819d32ce0be5c1b7b8abda9244523879a1788b2233fe934760b1126d90,PodSandboxId:3b02456c19e4132b342660c2e01b7b06f1dbc9fba8e4647697bdbb5dab719935,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710768113129840110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f93
83385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c83f6d14628f38cfecaf08a1c77500ec15d1305da4bad5aa27aec23b1931a82,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710768113148301449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5051b1a59a5bff141d1b26536b335d46317ed445c531f56a9d47fcf96874074f,PodSandboxId:c3736fc3b42858eff89a728d8815e0748b6bc0557cb21e0efbd90a21ecf82be5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710768113134214782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{i
o.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710768112831896821,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.
container.hash: cf352393,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96898006f26bb34adf8d6605356268848814dcfae484bd2e03f87a657d58c459,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710768112978305364,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubern
etes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8695cf0ac16e0c699161c8d6757608a3b09833cf6b0ac3aa257a9f9443bdb1,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767913024614151,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767609255434077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297552095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297898177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710767450094191982,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXI
TED,CreatedAt:1710767428281748509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:171076742813612617
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=204b8e4c-3e02-40bb-a9b0-625b0e9b676a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c4599693f823e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      51 seconds ago       Running             storage-provisioner       5                   f9ec5f3d15195       storage-provisioner
	cb03720fabfe4       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   d8718ce51df7d       kindnet-6rgvl
	e302d6170ad88       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   2                   446c946431e4d       kube-controller-manager-ha-942957
	3d89e600a2a98       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            3                   6d5f4d2062041       kube-apiserver-ha-942957
	d0e9ab707f966       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   a18c3ff2ffa86       busybox-5b5d89c9d6-h4q2t
	8f0b795d2bdf5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      2 minutes ago        Running             kube-proxy                1                   f9a167f3b0217       kube-proxy-97vsd
	b1d58a64362f8       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  3                   d33166cce5675       kube-vip-ha-942957
	355dac9c183ea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   1fb74babe23fc       coredns-5dd5756b68-f6dtz
	f4790438e5d49       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   52ff694a82f3e       coredns-5dd5756b68-pbr9j
	7ead34fbee6f7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   d8718ce51df7d       kindnet-6rgvl
	9c83f6d14628f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Exited              kube-apiserver            2                   6d5f4d2062041       kube-apiserver-ha-942957
	5051b1a59a5bf       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            1                   c3736fc3b4285       kube-scheduler-ha-942957
	dcec3d819d32c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      1                   3b02456c19e41       etcd-ha-942957
	96898006f26bb       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Exited              kube-controller-manager   1                   446c946431e4d       kube-controller-manager-ha-942957
	4e9f17e03f23a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   f9ec5f3d15195       storage-provisioner
	0b8695cf0ac16       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago        Exited              kube-vip                  2                   750ec46160c5a       kube-vip-ha-942957
	bc6f97ca3edce       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   a2d21119e214a       busybox-5b5d89c9d6-h4q2t
	c859be2ef6bde       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   0b6911927b37f       coredns-5dd5756b68-f6dtz
	e2cf377b129d8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   3daf97324e58a       coredns-5dd5756b68-pbr9j
	11bc6358bf6d2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago       Exited              kube-proxy                0                   c4b520f79bf4b       kube-proxy-97vsd
	09364d1b0b8ec       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago       Exited              kube-scheduler            0                   6e0049bc30922       kube-scheduler-ha-942957
	ac909d1fea8aa       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago       Exited              etcd                      0                   c9e7a1111cb30       etcd-ha-942957
	
	
	==> coredns [355dac9c183eaf505cd72808ccccdacf517f30ad0d26d6960f5d4e3094471652] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47177 - 46844 "HINFO IN 3095807628460413902.4050584119888136970. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.07791038s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45980->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67] <==
	[INFO] 10.244.0.4:59741 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000711436s
	[INFO] 10.244.1.2:33325 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003796173s
	[INFO] 10.244.1.2:40118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184538s
	[INFO] 10.244.1.2:38695 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158047s
	[INFO] 10.244.2.2:39278 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001539379s
	[INFO] 10.244.2.2:48574 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165918s
	[INFO] 10.244.0.4:52698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113008s
	[INFO] 10.244.0.4:50001 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135799s
	[INFO] 10.244.0.4:49373 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159584s
	[INFO] 10.244.1.2:44441 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118463s
	[INFO] 10.244.2.2:42552 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221661s
	[INFO] 10.244.2.2:46062 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090758s
	[INFO] 10.244.0.4:53179 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092569s
	[INFO] 10.244.1.2:45351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128077s
	[INFO] 10.244.1.2:52758 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144551s
	[INFO] 10.244.1.2:47551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000203433s
	[INFO] 10.244.2.2:53980 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115616s
	[INFO] 10.244.2.2:55318 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000181469s
	[INFO] 10.244.0.4:60630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069346s
	[INFO] 10.244.0.4:41251 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040242s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945] <==
	[INFO] 10.244.2.2:46720 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002275748s
	[INFO] 10.244.2.2:50733 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000275044s
	[INFO] 10.244.2.2:37004 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138849s
	[INFO] 10.244.2.2:33563 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224767s
	[INFO] 10.244.2.2:42566 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017421s
	[INFO] 10.244.0.4:54486 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00168008s
	[INFO] 10.244.0.4:46746 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001363608s
	[INFO] 10.244.0.4:38530 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231105s
	[INFO] 10.244.0.4:47152 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045351s
	[INFO] 10.244.0.4:57247 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070307s
	[INFO] 10.244.1.2:43996 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140398s
	[INFO] 10.244.1.2:36237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220389s
	[INFO] 10.244.1.2:37302 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111738s
	[INFO] 10.244.2.2:58342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134629s
	[INFO] 10.244.2.2:43645 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160061s
	[INFO] 10.244.0.4:58375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210567s
	[INFO] 10.244.0.4:50302 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075795s
	[INFO] 10.244.0.4:46012 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084361s
	[INFO] 10.244.1.2:37085 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000242114s
	[INFO] 10.244.2.2:47856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000192734s
	[INFO] 10.244.2.2:42553 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000213437s
	[INFO] 10.244.0.4:53951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102273s
	[INFO] 10.244.0.4:44758 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071111s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f4790438e5d4965634666c4e3a565cc62de39f46d29a13d541d44c19afd87e9b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42325 - 29984 "HINFO IN 4194563739134571877.5442077711432167674. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046508538s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54200->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-942957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_10_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:10:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:24:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:22:35 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:22:35 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:22:35 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:22:35 +0000   Mon, 18 Mar 2024 13:10:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-942957
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 98d7d2d7e6f44e39a7470fa399e42587
	  System UUID:                98d7d2d7-e6f4-4e39-a747-0fa399e42587
	  Boot ID:                    8d77322f-23ab-4abb-a476-3a13d0f588c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-h4q2t             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-f6dtz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-5dd5756b68-pbr9j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-942957                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-6rgvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-942957             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-942957    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-97vsd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-942957             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-942957                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 100s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-942957 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-942957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-942957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-942957 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Warning  ContainerGCFailed        2m43s (x2 over 3m43s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           87s                    node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal   RegisteredNode           87s                    node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	
	
	Name:               ha-942957-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_12_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:11:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:24:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:23:20 +0000   Mon, 18 Mar 2024 13:22:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:23:20 +0000   Mon, 18 Mar 2024 13:22:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:23:20 +0000   Mon, 18 Mar 2024 13:22:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:23:20 +0000   Mon, 18 Mar 2024 13:22:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-942957-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 effa4806d9ac4aae93234a5f4797b41e
	  System UUID:                effa4806-d9ac-4aae-9323-4a5f4797b41e
	  Boot ID:                    0d8480b6-af1f-4533-9aa2-3ade23cb65c3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9qmdx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-942957-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-d4smn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-942957-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-942957-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vjmnr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-942957-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-942957-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  RegisteredNode           12m                  node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  NodeNotReady             9m3s                 node-controller  Node ha-942957-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node ha-942957-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node ha-942957-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node ha-942957-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           87s                  node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  RegisteredNode           87s                  node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  RegisteredNode           27s                  node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	
	
	Name:               ha-942957-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_13_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:13:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:24:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:23:48 +0000   Mon, 18 Mar 2024 13:13:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:23:48 +0000   Mon, 18 Mar 2024 13:13:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:23:48 +0000   Mon, 18 Mar 2024 13:13:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:23:48 +0000   Mon, 18 Mar 2024 13:13:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    ha-942957-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec2118c8153b4c20b6861bbdce99bda8
	  System UUID:                ec2118c8-153b-4c20-b686-1bbdce99bda8
	  Boot ID:                    e6ebf1fc-dee7-4c19-a873-2e3589ce6a9b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-b64gc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-942957-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-4rf6r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-942957-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-942957-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-rxtls                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-942957-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-942957-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 39s   kube-proxy       
	  Normal   RegisteredNode           11m   node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	  Normal   RegisteredNode           11m   node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	  Normal   RegisteredNode           10m   node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	  Normal   RegisteredNode           87s   node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	  Normal   RegisteredNode           87s   node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	  Normal   Starting                 60s   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s   kubelet          Node ha-942957-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s   kubelet          Node ha-942957-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s   kubelet          Node ha-942957-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 60s   kubelet          Node ha-942957-m03 has been rebooted, boot id: e6ebf1fc-dee7-4c19-a873-2e3589ce6a9b
	  Normal   RegisteredNode           27s   node-controller  Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller
	
	
	Name:               ha-942957-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_14_08_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:14:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:24:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:24:09 +0000   Mon, 18 Mar 2024 13:24:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:24:09 +0000   Mon, 18 Mar 2024 13:24:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:24:09 +0000   Mon, 18 Mar 2024 13:24:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:24:09 +0000   Mon, 18 Mar 2024 13:24:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    ha-942957-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 b16089a645be4a78a5280af4bb880ea8
	  System UUID:                b16089a6-45be-4a78-a528-0af4bb880ea8
	  Boot ID:                    591f9bf4-0d70-41f4-9e0f-5e273cf420c1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-g4lxl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-gjnnp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x5 over 10m)  kubelet          Node ha-942957-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x5 over 10m)  kubelet          Node ha-942957-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x5 over 10m)  kubelet          Node ha-942957-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-942957-m04 status is now: NodeReady
	  Normal   RegisteredNode           87s                node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   NodeNotReady             47s                node-controller  Node ha-942957-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-942957-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-942957-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-942957-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-942957-m04 has been rebooted, boot id: 591f9bf4-0d70-41f4-9e0f-5e273cf420c1
	  Normal   NodeReady                8s                 kubelet          Node ha-942957-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.067435] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059503] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.165737] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.136769] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.243119] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.843891] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.062146] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.956739] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +1.288333] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.601273] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.093539] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.596662] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.054967] kauditd_printk_skb: 53 callbacks suppressed
	[Mar18 13:11] kauditd_printk_skb: 11 callbacks suppressed
	[Mar18 13:19] kauditd_printk_skb: 1 callbacks suppressed
	[Mar18 13:21] systemd-fstab-generator[3918]: Ignoring "noauto" option for root device
	[  +0.174625] systemd-fstab-generator[3930]: Ignoring "noauto" option for root device
	[  +0.208340] systemd-fstab-generator[3944]: Ignoring "noauto" option for root device
	[  +0.167312] systemd-fstab-generator[3956]: Ignoring "noauto" option for root device
	[  +0.280222] systemd-fstab-generator[3980]: Ignoring "noauto" option for root device
	[  +5.424372] systemd-fstab-generator[4080]: Ignoring "noauto" option for root device
	[  +0.095840] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.302809] kauditd_printk_skb: 12 callbacks suppressed
	[Mar18 13:22] kauditd_printk_skb: 95 callbacks suppressed
	[ +23.260712] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7] <==
	WARNING: 2024/03/18 13:20:09 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T13:20:09.338079Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:20:08.960912Z","time spent":"377.156073ms","remote":"127.0.0.1:51394","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	WARNING: 2024/03/18 13:20:09 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T13:20:09.338198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:20:01.720914Z","time spent":"7.615311874s","remote":"127.0.0.1:51056","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" limit:10000 "}
	WARNING: 2024/03/18 13:20:09 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T13:20:09.389644Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:20:09.389852Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T13:20:09.389944Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"821abe7be15f44a3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-18T13:20:09.390181Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390249Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390281Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390428Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390526Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390609Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390643Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390763Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.390813Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.390851Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.390953Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.39103Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.391146Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.391204Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.394132Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-18T13:20:09.394324Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-18T13:20:09.394361Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-942957","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"]}
	
	
	==> etcd [dcec3d819d32ce0be5c1b7b8abda9244523879a1788b2233fe934760b1126d90] <==
	{"level":"warn","ts":"2024-03-18T13:23:17.35982Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.135:2380/version","remote-member-id":"8fb0b67bf02b5ef3","error":"Get \"https://192.168.39.135:2380/version\": dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:17.3599Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"8fb0b67bf02b5ef3","error":"Get \"https://192.168.39.135:2380/version\": dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:19.12005Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8fb0b67bf02b5ef3","rtt":"0s","error":"dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:19.128313Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8fb0b67bf02b5ef3","rtt":"0s","error":"dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:21.362424Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.135:2380/version","remote-member-id":"8fb0b67bf02b5ef3","error":"Get \"https://192.168.39.135:2380/version\": dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:21.362502Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"8fb0b67bf02b5ef3","error":"Get \"https://192.168.39.135:2380/version\": dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:24.120764Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8fb0b67bf02b5ef3","rtt":"0s","error":"dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:24.129039Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8fb0b67bf02b5ef3","rtt":"0s","error":"dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:25.365049Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.135:2380/version","remote-member-id":"8fb0b67bf02b5ef3","error":"Get \"https://192.168.39.135:2380/version\": dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:25.365136Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"8fb0b67bf02b5ef3","error":"Get \"https://192.168.39.135:2380/version\": dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:29.121857Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8fb0b67bf02b5ef3","rtt":"0s","error":"dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:29.129606Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8fb0b67bf02b5ef3","rtt":"0s","error":"dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:29.367907Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.135:2380/version","remote-member-id":"8fb0b67bf02b5ef3","error":"Get \"https://192.168.39.135:2380/version\": dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:29.368122Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"8fb0b67bf02b5ef3","error":"Get \"https://192.168.39.135:2380/version\": dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:33.370804Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.135:2380/version","remote-member-id":"8fb0b67bf02b5ef3","error":"Get \"https://192.168.39.135:2380/version\": dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:33.370941Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"8fb0b67bf02b5ef3","error":"Get \"https://192.168.39.135:2380/version\": dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:34.122918Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8fb0b67bf02b5ef3","rtt":"0s","error":"dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T13:23:34.130353Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8fb0b67bf02b5ef3","rtt":"0s","error":"dial tcp 192.168.39.135:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-18T13:23:34.799061Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:23:34.799856Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:23:34.812131Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"821abe7be15f44a3","to":"8fb0b67bf02b5ef3","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-18T13:23:34.812248Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:23:34.815614Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"821abe7be15f44a3","to":"8fb0b67bf02b5ef3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-18T13:23:34.815791Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:23:34.815993Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	
	
	==> kernel <==
	 13:24:17 up 14 min,  0 users,  load average: 0.80, 0.56, 0.36
	Linux ha-942957 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b] <==
	I0318 13:21:53.994914       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 13:22:11.802299       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0318 13:22:17.946223       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.55:36934->10.96.0.1:443: read: connection reset by peer
	I0318 13:22:21.018287       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0318 13:22:27.163619       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0318 13:22:33.308268       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [cb03720fabfe408fd30b89414d0d98f5f4bb3691e6b3750ded072923397f5915] <==
	I0318 13:23:44.037473       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:23:54.058872       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:23:54.059443       1 main.go:227] handling current node
	I0318 13:23:54.059464       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:23:54.059474       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:23:54.059732       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0318 13:23:54.059767       1 main.go:250] Node ha-942957-m03 has CIDR [10.244.2.0/24] 
	I0318 13:23:54.059869       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:23:54.059911       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:24:04.108803       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:24:04.108936       1 main.go:227] handling current node
	I0318 13:24:04.108986       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:24:04.109016       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:24:04.109216       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0318 13:24:04.109272       1 main.go:250] Node ha-942957-m03 has CIDR [10.244.2.0/24] 
	I0318 13:24:04.109385       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:24:04.109417       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:24:14.120965       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:24:14.121010       1 main.go:227] handling current node
	I0318 13:24:14.121021       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:24:14.121034       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:24:14.121152       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0318 13:24:14.121178       1 main.go:250] Node ha-942957-m03 has CIDR [10.244.2.0/24] 
	I0318 13:24:14.121288       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:24:14.121314       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3d89e600a2a98b1500584b038f2146960362b64eb3f2196d13980dfdd501984f] <==
	I0318 13:22:35.247970       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 13:22:35.248193       1 naming_controller.go:291] Starting NamingConditionController
	I0318 13:22:35.248311       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:22:35.251052       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:22:35.251141       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:22:35.251239       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:22:35.339613       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:22:35.339755       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:22:35.339831       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:22:35.339838       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:22:35.340042       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:22:35.345746       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:22:35.430477       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:22:35.432313       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:22:35.432460       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:22:35.433461       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:22:35.433591       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:22:35.434742       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:22:35.439737       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0318 13:22:35.452183       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.135]
	I0318 13:22:35.456750       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:22:35.468980       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0318 13:22:35.473926       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0318 13:22:36.250266       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0318 13:22:37.107791       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.135 192.168.39.68]
	
	
	==> kube-apiserver [9c83f6d14628f38cfecaf08a1c77500ec15d1305da4bad5aa27aec23b1931a82] <==
	I0318 13:21:54.149792       1 options.go:220] external host was not specified, using 192.168.39.68
	I0318 13:21:54.151137       1 server.go:148] Version: v1.28.4
	I0318 13:21:54.151204       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:21:55.027788       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 13:21:55.037111       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 13:21:55.037261       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 13:21:55.037570       1 instance.go:298] Using reconciler: lease
	W0318 13:22:15.025642       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0318 13:22:15.026077       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0318 13:22:15.038863       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [96898006f26bb34adf8d6605356268848814dcfae484bd2e03f87a657d58c459] <==
	I0318 13:21:54.879794       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:21:55.220513       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:21:55.220712       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:21:55.222926       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:21:55.223129       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:21:55.224392       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:21:55.224510       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0318 13:22:16.045195       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.68:8443/healthz\": dial tcp 192.168.39.68:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e302d6170ad889e67d47fda9faec591edcf2b4b9ecb26c4278871f72aee01329] <==
	I0318 13:22:50.818985       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-942957-m02"
	I0318 13:22:50.819075       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-942957-m04"
	I0318 13:22:50.819106       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-942957-m03"
	I0318 13:22:50.819306       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 13:22:50.819456       1 event.go:307] "Event occurred" object="ha-942957" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-942957 event: Registered Node ha-942957 in Controller"
	I0318 13:22:50.819515       1 event.go:307] "Event occurred" object="ha-942957-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller"
	I0318 13:22:50.821577       1 event.go:307] "Event occurred" object="ha-942957-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-942957-m03 event: Registered Node ha-942957-m03 in Controller"
	I0318 13:22:50.821814       1 event.go:307] "Event occurred" object="ha-942957-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller"
	I0318 13:22:50.820755       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:22:50.824187       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:22:50.830456       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:22:50.836267       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:22:50.885519       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:22:50.905175       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:22:50.923820       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:22:50.928158       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:22:50.941530       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:22:51.358584       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:22:51.397849       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:22:51.398031       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:23:18.330078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="95.467668ms"
	I0318 13:23:18.330349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="67.654µs"
	I0318 13:23:39.551880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.305356ms"
	I0318 13:23:39.552018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="65.59µs"
	I0318 13:24:09.449921       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-942957-m04"
	
	
	==> kube-proxy [11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1] <==
	E0318 13:18:43.482515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:18:51.418088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:18:51.418179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:18:51.418262       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:18:51.418279       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:18:51.418325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:18:51.418349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:01.850261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:01.850523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:04.924160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:04.924257       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:04.924392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:04.924506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:17.212825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:17.212963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:20.282769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:20.283116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:29.498361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:29.498439       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:20:00.218457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:20:00.218547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:20:03.291311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:20:03.291771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:20:06.362250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:20:06.362359       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [8f0b795d2bdf59643abf48253311965d7b9a4105fca8cbf0df38de022edfa637] <==
	I0318 13:21:55.276955       1 server_others.go:69] "Using iptables proxy"
	E0318 13:21:56.954412       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-942957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:22:00.028061       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-942957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:22:03.098907       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-942957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:22:09.243751       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-942957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:22:18.458873       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-942957": dial tcp 192.168.39.254:8443: connect: no route to host
	I0318 13:22:36.450635       1 node.go:141] Successfully retrieved node IP: 192.168.39.68
	I0318 13:22:36.543046       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:22:36.543099       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:22:36.549040       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:22:36.549238       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:22:36.549747       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:22:36.549784       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:22:36.556204       1 config.go:188] "Starting service config controller"
	I0318 13:22:36.556296       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:22:36.556332       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:22:36.556336       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:22:36.557112       1 config.go:315] "Starting node config controller"
	I0318 13:22:36.557146       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:22:36.657265       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:22:36.657344       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:22:36.657400       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99] <==
	W0318 13:20:01.816895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 13:20:01.816952       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 13:20:02.136991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:20:02.137098       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:20:02.142599       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:20:02.142696       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:20:02.425796       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 13:20:02.425899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 13:20:02.514504       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:20:02.514602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:20:02.655359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:20:02.655461       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:20:03.140737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:20:03.140830       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:20:03.320123       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:20:03.320257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:20:03.320301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:20:03.320322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:20:03.587024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:20:03.587132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:20:08.719011       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 13:20:08.719077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:20:09.281134       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:20:09.281492       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0318 13:20:09.282223       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5051b1a59a5bff141d1b26536b335d46317ed445c531f56a9d47fcf96874074f] <==
	W0318 13:22:31.128384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.68:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:31.128466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.68:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:31.133125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:31.133225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:31.570849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.68:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:31.570928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.68:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:32.614225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.68:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:32.614344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.68:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:32.687235       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.68:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:32.687316       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.68:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:32.908338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:32.908455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:32.973804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.68:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:32.973924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.68:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:33.006970       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:33.007007       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:35.318525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:22:35.318583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:22:35.318748       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 13:22:35.318762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 13:22:35.318812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:22:35.318845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:22:35.319048       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 13:22:35.319154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:22:35.853167       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 13:22:35 ha-942957 kubelet[1368]: E0318 13:22:35.090766    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:22:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:22:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:22:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:22:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:22:37 ha-942957 kubelet[1368]: I0318 13:22:37.674071    1368 scope.go:117] "RemoveContainer" containerID="062711d0381093b919397a578a6f81b07f2007e909c784461f578842f5a218fc"
	Mar 18 13:22:37 ha-942957 kubelet[1368]: I0318 13:22:37.674415    1368 scope.go:117] "RemoveContainer" containerID="7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b"
	Mar 18 13:22:37 ha-942957 kubelet[1368]: E0318 13:22:37.674818    1368 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-6rgvl_kube-system(eb410475-7c79-4ac1-b7df-a4781100d228)\"" pod="kube-system/kindnet-6rgvl" podUID="eb410475-7c79-4ac1-b7df-a4781100d228"
	Mar 18 13:22:38 ha-942957 kubelet[1368]: I0318 13:22:38.090605    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-h4q2t" podStartSLOduration=550.442108912 podCreationTimestamp="2024-03-18 13:13:26 +0000 UTC" firstStartedPulling="2024-03-18 13:13:27.595476145 +0000 UTC m=+172.838029429" lastFinishedPulling="2024-03-18 13:13:29.243863232 +0000 UTC m=+174.486416524" observedRunningTime="2024-03-18 13:13:29.879784411 +0000 UTC m=+175.122337717" watchObservedRunningTime="2024-03-18 13:22:38.090496007 +0000 UTC m=+723.333049313"
	Mar 18 13:22:38 ha-942957 kubelet[1368]: I0318 13:22:38.995435    1368 scope.go:117] "RemoveContainer" containerID="96898006f26bb34adf8d6605356268848814dcfae484bd2e03f87a657d58c459"
	Mar 18 13:22:45 ha-942957 kubelet[1368]: I0318 13:22:45.995163    1368 scope.go:117] "RemoveContainer" containerID="4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9"
	Mar 18 13:22:45 ha-942957 kubelet[1368]: E0318 13:22:45.996087    1368 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b67e544b-41f2-4be4-90ed-971378c82a76)\"" pod="kube-system/storage-provisioner" podUID="b67e544b-41f2-4be4-90ed-971378c82a76"
	Mar 18 13:22:50 ha-942957 kubelet[1368]: I0318 13:22:50.995517    1368 scope.go:117] "RemoveContainer" containerID="7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b"
	Mar 18 13:22:50 ha-942957 kubelet[1368]: E0318 13:22:50.996028    1368 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-6rgvl_kube-system(eb410475-7c79-4ac1-b7df-a4781100d228)\"" pod="kube-system/kindnet-6rgvl" podUID="eb410475-7c79-4ac1-b7df-a4781100d228"
	Mar 18 13:22:58 ha-942957 kubelet[1368]: I0318 13:22:58.995892    1368 scope.go:117] "RemoveContainer" containerID="4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9"
	Mar 18 13:22:58 ha-942957 kubelet[1368]: E0318 13:22:58.997447    1368 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b67e544b-41f2-4be4-90ed-971378c82a76)\"" pod="kube-system/storage-provisioner" podUID="b67e544b-41f2-4be4-90ed-971378c82a76"
	Mar 18 13:23:02 ha-942957 kubelet[1368]: I0318 13:23:02.995367    1368 scope.go:117] "RemoveContainer" containerID="7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b"
	Mar 18 13:23:09 ha-942957 kubelet[1368]: I0318 13:23:09.995971    1368 scope.go:117] "RemoveContainer" containerID="4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9"
	Mar 18 13:23:09 ha-942957 kubelet[1368]: E0318 13:23:09.996936    1368 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b67e544b-41f2-4be4-90ed-971378c82a76)\"" pod="kube-system/storage-provisioner" podUID="b67e544b-41f2-4be4-90ed-971378c82a76"
	Mar 18 13:23:24 ha-942957 kubelet[1368]: I0318 13:23:24.995908    1368 scope.go:117] "RemoveContainer" containerID="4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9"
	Mar 18 13:23:35 ha-942957 kubelet[1368]: E0318 13:23:35.072400    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:23:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:23:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:23:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:23:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:24:16.104749 1093227 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18427-1067917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
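The "bufio.Scanner: token too long" failure in the stderr above is Go's bufio.ErrTooLong: a Scanner refuses any single line longer than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB). Below is a minimal sketch of that behaviour and the usual workaround of enlarging the buffer with Scanner.Buffer; this is an illustration only, not minikube's actual logs.go, and the file path is a placeholder.

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLines scans a file line by line with an enlarged buffer so that
// very long lines do not trip bufio.ErrTooLong ("token too long").
func readLines(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to 10 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	return sc.Err() // still bufio.ErrTooLong if a line exceeds the 10 MiB cap
}

func main() {
	if err := readLines("lastStart.txt"); err != nil { // placeholder path
		fmt.Fprintln(os.Stderr, err)
	}
}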
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-942957 -n ha-942957
helpers_test.go:261: (dbg) Run:  kubectl --context ha-942957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (373.06s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 stop -v=7 --alsologtostderr: exit status 82 (2m0.510475101s)

                                                
                                                
-- stdout --
	* Stopping node "ha-942957-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:24:36.752368 1093614 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:24:36.752519 1093614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:24:36.752527 1093614 out.go:304] Setting ErrFile to fd 2...
	I0318 13:24:36.752532 1093614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:24:36.752736 1093614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:24:36.752972 1093614 out.go:298] Setting JSON to false
	I0318 13:24:36.753063 1093614 mustload.go:65] Loading cluster: ha-942957
	I0318 13:24:36.753431 1093614 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:24:36.753529 1093614 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:24:36.753724 1093614 mustload.go:65] Loading cluster: ha-942957
	I0318 13:24:36.753883 1093614 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:24:36.753917 1093614 stop.go:39] StopHost: ha-942957-m04
	I0318 13:24:36.754316 1093614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:24:36.754357 1093614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:24:36.770631 1093614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0318 13:24:36.771134 1093614 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:24:36.771877 1093614 main.go:141] libmachine: Using API Version  1
	I0318 13:24:36.771900 1093614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:24:36.772350 1093614 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:24:36.775294 1093614 out.go:177] * Stopping node "ha-942957-m04"  ...
	I0318 13:24:36.776724 1093614 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 13:24:36.776774 1093614 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:24:36.777120 1093614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 13:24:36.777154 1093614 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:24:36.780432 1093614 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:24:36.780888 1093614 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:24:04 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:24:36.780952 1093614 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:24:36.781069 1093614 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:24:36.781270 1093614 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:24:36.781445 1093614 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:24:36.781567 1093614 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	I0318 13:24:36.871735 1093614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 13:24:36.926013 1093614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 13:24:36.979032 1093614 main.go:141] libmachine: Stopping "ha-942957-m04"...
	I0318 13:24:36.979091 1093614 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:24:36.980729 1093614 main.go:141] libmachine: (ha-942957-m04) Calling .Stop
	I0318 13:24:36.984890 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 0/120
	I0318 13:24:37.986377 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 1/120
	I0318 13:24:38.987728 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 2/120
	I0318 13:24:39.989314 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 3/120
	I0318 13:24:40.990692 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 4/120
	I0318 13:24:41.992202 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 5/120
	I0318 13:24:42.993502 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 6/120
	I0318 13:24:43.995478 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 7/120
	I0318 13:24:44.997431 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 8/120
	I0318 13:24:45.999344 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 9/120
	I0318 13:24:47.001635 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 10/120
	I0318 13:24:48.003198 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 11/120
	I0318 13:24:49.004683 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 12/120
	I0318 13:24:50.006750 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 13/120
	I0318 13:24:51.008170 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 14/120
	I0318 13:24:52.009586 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 15/120
	I0318 13:24:53.010978 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 16/120
	I0318 13:24:54.013032 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 17/120
	I0318 13:24:55.014558 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 18/120
	I0318 13:24:56.016015 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 19/120
	I0318 13:24:57.018143 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 20/120
	I0318 13:24:58.019557 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 21/120
	I0318 13:24:59.021850 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 22/120
	I0318 13:25:00.023180 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 23/120
	I0318 13:25:01.024492 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 24/120
	I0318 13:25:02.026366 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 25/120
	I0318 13:25:03.028185 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 26/120
	I0318 13:25:04.030341 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 27/120
	I0318 13:25:05.032130 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 28/120
	I0318 13:25:06.034530 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 29/120
	I0318 13:25:07.036739 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 30/120
	I0318 13:25:08.038197 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 31/120
	I0318 13:25:09.039735 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 32/120
	I0318 13:25:10.041298 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 33/120
	I0318 13:25:11.042500 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 34/120
	I0318 13:25:12.045080 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 35/120
	I0318 13:25:13.046500 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 36/120
	I0318 13:25:14.048753 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 37/120
	I0318 13:25:15.050617 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 38/120
	I0318 13:25:16.052229 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 39/120
	I0318 13:25:17.054837 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 40/120
	I0318 13:25:18.057027 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 41/120
	I0318 13:25:19.058529 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 42/120
	I0318 13:25:20.060218 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 43/120
	I0318 13:25:21.061821 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 44/120
	I0318 13:25:22.063648 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 45/120
	I0318 13:25:23.065904 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 46/120
	I0318 13:25:24.067438 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 47/120
	I0318 13:25:25.068831 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 48/120
	I0318 13:25:26.070465 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 49/120
	I0318 13:25:27.072503 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 50/120
	I0318 13:25:28.074512 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 51/120
	I0318 13:25:29.075921 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 52/120
	I0318 13:25:30.077457 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 53/120
	I0318 13:25:31.078895 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 54/120
	I0318 13:25:32.080279 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 55/120
	I0318 13:25:33.082565 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 56/120
	I0318 13:25:34.084528 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 57/120
	I0318 13:25:35.086439 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 58/120
	I0318 13:25:36.087652 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 59/120
	I0318 13:25:37.089929 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 60/120
	I0318 13:25:38.091358 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 61/120
	I0318 13:25:39.092721 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 62/120
	I0318 13:25:40.094303 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 63/120
	I0318 13:25:41.095880 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 64/120
	I0318 13:25:42.098046 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 65/120
	I0318 13:25:43.099813 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 66/120
	I0318 13:25:44.101279 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 67/120
	I0318 13:25:45.102686 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 68/120
	I0318 13:25:46.104088 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 69/120
	I0318 13:25:47.106366 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 70/120
	I0318 13:25:48.108090 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 71/120
	I0318 13:25:49.110307 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 72/120
	I0318 13:25:50.112274 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 73/120
	I0318 13:25:51.114504 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 74/120
	I0318 13:25:52.116449 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 75/120
	I0318 13:25:53.117854 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 76/120
	I0318 13:25:54.119688 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 77/120
	I0318 13:25:55.121494 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 78/120
	I0318 13:25:56.123476 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 79/120
	I0318 13:25:57.125471 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 80/120
	I0318 13:25:58.127023 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 81/120
	I0318 13:25:59.128555 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 82/120
	I0318 13:26:00.130734 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 83/120
	I0318 13:26:01.132399 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 84/120
	I0318 13:26:02.134023 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 85/120
	I0318 13:26:03.136002 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 86/120
	I0318 13:26:04.137562 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 87/120
	I0318 13:26:05.139933 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 88/120
	I0318 13:26:06.141301 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 89/120
	I0318 13:26:07.143540 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 90/120
	I0318 13:26:08.145184 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 91/120
	I0318 13:26:09.146463 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 92/120
	I0318 13:26:10.147771 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 93/120
	I0318 13:26:11.149184 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 94/120
	I0318 13:26:12.151043 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 95/120
	I0318 13:26:13.152425 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 96/120
	I0318 13:26:14.154086 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 97/120
	I0318 13:26:15.155612 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 98/120
	I0318 13:26:16.156929 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 99/120
	I0318 13:26:17.159103 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 100/120
	I0318 13:26:18.160691 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 101/120
	I0318 13:26:19.162017 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 102/120
	I0318 13:26:20.163593 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 103/120
	I0318 13:26:21.164896 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 104/120
	I0318 13:26:22.166300 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 105/120
	I0318 13:26:23.167928 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 106/120
	I0318 13:26:24.169631 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 107/120
	I0318 13:26:25.171764 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 108/120
	I0318 13:26:26.173148 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 109/120
	I0318 13:26:27.175437 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 110/120
	I0318 13:26:28.176939 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 111/120
	I0318 13:26:29.178401 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 112/120
	I0318 13:26:30.179869 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 113/120
	I0318 13:26:31.181458 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 114/120
	I0318 13:26:32.183186 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 115/120
	I0318 13:26:33.185674 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 116/120
	I0318 13:26:34.188327 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 117/120
	I0318 13:26:35.189974 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 118/120
	I0318 13:26:36.191456 1093614 main.go:141] libmachine: (ha-942957-m04) Waiting for machine to stop 119/120
	I0318 13:26:37.192074 1093614 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 13:26:37.192165 1093614 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 13:26:37.194268 1093614 out.go:177] 
	W0318 13:26:37.195816 1093614 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 13:26:37.195855 1093614 out.go:239] * 
	* 
	W0318 13:26:37.200925 1093614 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:26:37.202448 1093614 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-942957 stop -v=7 --alsologtostderr": exit status 82
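The stderr above shows the shape of the failure: a poll loop checks the VM roughly once per second ("Waiting for machine to stop 0/120" through "119/120") and gives up after two minutes while libvirt still reports the domain as "Running", which surfaces as GUEST_STOP_TIMEOUT and exit status 82. The following is a minimal sketch of that polling pattern under those assumptions, with hypothetical helper names; it is not minikube's stop code.

package main

import (
	"fmt"
	"time"
)

// waitForStop polls the VM state once per second, up to attempts times,
// and returns a timeout-style error if it never reaches "Stopped".
func waitForStop(getState func() string, attempts int) error {
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", getState())
}

func main() {
	// Stub probe that never stops; the real run above used 120 attempts.
	err := waitForStop(func() string { return "Running" }, 3)
	fmt.Println("stop err:", err)
}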
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr: exit status 3 (19.028484117s)

                                                
                                                
-- stdout --
	ha-942957
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942957-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:26:37.265157 1093916 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:26:37.265292 1093916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:26:37.265304 1093916 out.go:304] Setting ErrFile to fd 2...
	I0318 13:26:37.265311 1093916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:26:37.265537 1093916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:26:37.265773 1093916 out.go:298] Setting JSON to false
	I0318 13:26:37.265819 1093916 mustload.go:65] Loading cluster: ha-942957
	I0318 13:26:37.265916 1093916 notify.go:220] Checking for updates...
	I0318 13:26:37.267274 1093916 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:26:37.267366 1093916 status.go:255] checking status of ha-942957 ...
	I0318 13:26:37.268461 1093916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:26:37.268558 1093916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:26:37.289153 1093916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0318 13:26:37.289684 1093916 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:26:37.290333 1093916 main.go:141] libmachine: Using API Version  1
	I0318 13:26:37.290363 1093916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:26:37.290790 1093916 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:26:37.291102 1093916 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:26:37.292754 1093916 status.go:330] ha-942957 host status = "Running" (err=<nil>)
	I0318 13:26:37.292774 1093916 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:26:37.293085 1093916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:26:37.293133 1093916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:26:37.308267 1093916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39867
	I0318 13:26:37.308660 1093916 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:26:37.309189 1093916 main.go:141] libmachine: Using API Version  1
	I0318 13:26:37.309216 1093916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:26:37.309548 1093916 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:26:37.309731 1093916 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:26:37.312549 1093916 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:26:37.312920 1093916 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:26:37.312943 1093916 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:26:37.313062 1093916 host.go:66] Checking if "ha-942957" exists ...
	I0318 13:26:37.313356 1093916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:26:37.313398 1093916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:26:37.328874 1093916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42871
	I0318 13:26:37.329367 1093916 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:26:37.329898 1093916 main.go:141] libmachine: Using API Version  1
	I0318 13:26:37.329927 1093916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:26:37.330292 1093916 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:26:37.330495 1093916 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:26:37.330727 1093916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:26:37.330756 1093916 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:26:37.333774 1093916 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:26:37.334212 1093916 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:26:37.334235 1093916 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:26:37.334385 1093916 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:26:37.334588 1093916 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:26:37.334726 1093916 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:26:37.334857 1093916 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:26:37.422424 1093916 ssh_runner.go:195] Run: systemctl --version
	I0318 13:26:37.429966 1093916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:26:37.450279 1093916 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:26:37.450311 1093916 api_server.go:166] Checking apiserver status ...
	I0318 13:26:37.450347 1093916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:26:37.466534 1093916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5189/cgroup
	W0318 13:26:37.476414 1093916 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:26:37.476507 1093916 ssh_runner.go:195] Run: ls
	I0318 13:26:37.481167 1093916 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:26:37.486316 1093916 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:26:37.486345 1093916 status.go:422] ha-942957 apiserver status = Running (err=<nil>)
	I0318 13:26:37.486356 1093916 status.go:257] ha-942957 status: &{Name:ha-942957 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:26:37.486381 1093916 status.go:255] checking status of ha-942957-m02 ...
	I0318 13:26:37.486797 1093916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:26:37.486840 1093916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:26:37.502089 1093916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0318 13:26:37.502495 1093916 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:26:37.502920 1093916 main.go:141] libmachine: Using API Version  1
	I0318 13:26:37.502952 1093916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:26:37.503336 1093916 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:26:37.503600 1093916 main.go:141] libmachine: (ha-942957-m02) Calling .GetState
	I0318 13:26:37.505498 1093916 status.go:330] ha-942957-m02 host status = "Running" (err=<nil>)
	I0318 13:26:37.505515 1093916 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:26:37.505791 1093916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:26:37.505827 1093916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:26:37.522415 1093916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37539
	I0318 13:26:37.522880 1093916 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:26:37.523356 1093916 main.go:141] libmachine: Using API Version  1
	I0318 13:26:37.523380 1093916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:26:37.523706 1093916 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:26:37.523906 1093916 main.go:141] libmachine: (ha-942957-m02) Calling .GetIP
	I0318 13:26:37.527070 1093916 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:26:37.527522 1093916 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:21:59 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:26:37.527548 1093916 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:26:37.527738 1093916 host.go:66] Checking if "ha-942957-m02" exists ...
	I0318 13:26:37.528081 1093916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:26:37.528143 1093916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:26:37.543844 1093916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0318 13:26:37.544290 1093916 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:26:37.544847 1093916 main.go:141] libmachine: Using API Version  1
	I0318 13:26:37.544882 1093916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:26:37.545243 1093916 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:26:37.545461 1093916 main.go:141] libmachine: (ha-942957-m02) Calling .DriverName
	I0318 13:26:37.545642 1093916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:26:37.545662 1093916 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHHostname
	I0318 13:26:37.548551 1093916 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:26:37.549042 1093916 main.go:141] libmachine: (ha-942957-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c9:87", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:21:59 +0000 UTC Type:0 Mac:52:54:00:20:c9:87 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-942957-m02 Clientid:01:52:54:00:20:c9:87}
	I0318 13:26:37.549063 1093916 main.go:141] libmachine: (ha-942957-m02) DBG | domain ha-942957-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:20:c9:87 in network mk-ha-942957
	I0318 13:26:37.549215 1093916 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHPort
	I0318 13:26:37.549408 1093916 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHKeyPath
	I0318 13:26:37.549560 1093916 main.go:141] libmachine: (ha-942957-m02) Calling .GetSSHUsername
	I0318 13:26:37.549699 1093916 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m02/id_rsa Username:docker}
	I0318 13:26:37.638299 1093916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:26:37.658568 1093916 kubeconfig.go:125] found "ha-942957" server: "https://192.168.39.254:8443"
	I0318 13:26:37.658624 1093916 api_server.go:166] Checking apiserver status ...
	I0318 13:26:37.658673 1093916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:26:37.680037 1093916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0318 13:26:37.692380 1093916 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:26:37.692482 1093916 ssh_runner.go:195] Run: ls
	I0318 13:26:37.699537 1093916 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 13:26:37.704381 1093916 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 13:26:37.704406 1093916 status.go:422] ha-942957-m02 apiserver status = Running (err=<nil>)
	I0318 13:26:37.704419 1093916 status.go:257] ha-942957-m02 status: &{Name:ha-942957-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:26:37.704440 1093916 status.go:255] checking status of ha-942957-m04 ...
	I0318 13:26:37.704748 1093916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:26:37.704814 1093916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:26:37.719910 1093916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0318 13:26:37.720392 1093916 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:26:37.720848 1093916 main.go:141] libmachine: Using API Version  1
	I0318 13:26:37.720871 1093916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:26:37.721274 1093916 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:26:37.721498 1093916 main.go:141] libmachine: (ha-942957-m04) Calling .GetState
	I0318 13:26:37.723268 1093916 status.go:330] ha-942957-m04 host status = "Running" (err=<nil>)
	I0318 13:26:37.723298 1093916 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:26:37.723598 1093916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:26:37.723634 1093916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:26:37.740132 1093916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
	I0318 13:26:37.740633 1093916 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:26:37.741167 1093916 main.go:141] libmachine: Using API Version  1
	I0318 13:26:37.741191 1093916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:26:37.741595 1093916 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:26:37.741825 1093916 main.go:141] libmachine: (ha-942957-m04) Calling .GetIP
	I0318 13:26:37.744579 1093916 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:26:37.745025 1093916 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:24:04 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:26:37.745045 1093916 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:26:37.745213 1093916 host.go:66] Checking if "ha-942957-m04" exists ...
	I0318 13:26:37.745502 1093916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:26:37.745544 1093916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:26:37.760403 1093916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37775
	I0318 13:26:37.760888 1093916 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:26:37.761375 1093916 main.go:141] libmachine: Using API Version  1
	I0318 13:26:37.761394 1093916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:26:37.761730 1093916 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:26:37.761907 1093916 main.go:141] libmachine: (ha-942957-m04) Calling .DriverName
	I0318 13:26:37.762099 1093916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:26:37.762120 1093916 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHHostname
	I0318 13:26:37.764773 1093916 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:26:37.765146 1093916 main.go:141] libmachine: (ha-942957-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:61:2d", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:24:04 +0000 UTC Type:0 Mac:52:54:00:11:61:2d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-942957-m04 Clientid:01:52:54:00:11:61:2d}
	I0318 13:26:37.765190 1093916 main.go:141] libmachine: (ha-942957-m04) DBG | domain ha-942957-m04 has defined IP address 192.168.39.221 and MAC address 52:54:00:11:61:2d in network mk-ha-942957
	I0318 13:26:37.765299 1093916 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHPort
	I0318 13:26:37.765458 1093916 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHKeyPath
	I0318 13:26:37.765581 1093916 main.go:141] libmachine: (ha-942957-m04) Calling .GetSSHUsername
	I0318 13:26:37.765752 1093916 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957-m04/id_rsa Username:docker}
	W0318 13:26:56.232056 1093916 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.221:22: connect: no route to host
	W0318 13:26:56.232169 1093916 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.221:22: connect: no route to host
	E0318 13:26:56.232187 1093916 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.221:22: connect: no route to host
	I0318 13:26:56.232195 1093916 status.go:257] ha-942957-m04 status: &{Name:ha-942957-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0318 13:26:56.232237 1093916 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.221:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr" : exit status 3
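For the two surviving control-plane nodes, the status command above treats the apiserver as healthy only after "Checking apiserver healthz at https://192.168.39.254:8443/healthz" returns HTTP 200 with body "ok"; the worker ha-942957-m04 is marked Error earlier, when the SSH dial fails with "no route to host". Below is a minimal sketch of such a healthz probe, assuming the apiserver certificate is self-signed and verification is skipped for the check; this is an illustration only, not minikube's status.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiServerHealthy reports whether GET <url> answers 200 with body "ok".
func apiServerHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Self-signed apiserver cert in this setup; skip verification
			// for this illustrative check only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiServerHealthy("https://192.168.39.254:8443/healthz")
	fmt.Println(ok, err)
}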
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-942957 -n ha-942957
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-942957 logs -n 25: (1.820102679s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-942957 ssh -n ha-942957-m02 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m03_ha-942957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04:/home/docker/cp-test_ha-942957-m03_ha-942957-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m04 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m03_ha-942957-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp testdata/cp-test.txt                                               | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile666867504/001/cp-test_ha-942957-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957:/home/docker/cp-test_ha-942957-m04_ha-942957.txt                      |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957 sudo cat                                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957.txt                                |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m02:/home/docker/cp-test_ha-942957-m04_ha-942957-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m02 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m03:/home/docker/cp-test_ha-942957-m04_ha-942957-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n                                                                | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | ha-942957-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-942957 ssh -n ha-942957-m03 sudo cat                                         | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_ha-942957-m04_ha-942957-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-942957 node stop m02 -v=7                                                    | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-942957 node start m02 -v=7                                                   | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-942957 -v=7                                                          | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-942957 -v=7                                                               | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-942957 --wait=true -v=7                                                   | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:20 UTC | 18 Mar 24 13:24 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-942957                                                               | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:24 UTC |                     |
	| node    | ha-942957 node delete m03 -v=7                                                  | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:24 UTC | 18 Mar 24 13:24 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-942957 stop -v=7                                                             | ha-942957 | jenkins | v1.32.0 | 18 Mar 24 13:24 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:20:08
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:20:08.378816 1092163 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:20:08.378933 1092163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:20:08.378940 1092163 out.go:304] Setting ErrFile to fd 2...
	I0318 13:20:08.378944 1092163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:20:08.379143 1092163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:20:08.379678 1092163 out.go:298] Setting JSON to false
	I0318 13:20:08.380747 1092163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":18155,"bootTime":1710749853,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:20:08.380819 1092163 start.go:139] virtualization: kvm guest
	I0318 13:20:08.383368 1092163 out.go:177] * [ha-942957] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:20:08.385468 1092163 notify.go:220] Checking for updates...
	I0318 13:20:08.385490 1092163 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 13:20:08.386801 1092163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:20:08.388197 1092163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:20:08.389621 1092163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:20:08.391019 1092163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:20:08.392376 1092163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:20:08.394208 1092163 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:20:08.394351 1092163 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:20:08.394819 1092163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:20:08.394876 1092163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:20:08.410437 1092163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34999
	I0318 13:20:08.410953 1092163 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:20:08.411549 1092163 main.go:141] libmachine: Using API Version  1
	I0318 13:20:08.411574 1092163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:20:08.411934 1092163 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:20:08.412116 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:20:08.449830 1092163 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:20:08.451270 1092163 start.go:297] selected driver: kvm2
	I0318 13:20:08.451293 1092163 start.go:901] validating driver "kvm2" against &{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:20:08.451475 1092163 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:20:08.451887 1092163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:20:08.451968 1092163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:20:08.468214 1092163 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:20:08.468956 1092163 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:20:08.469039 1092163 cni.go:84] Creating CNI manager for ""
	I0318 13:20:08.469063 1092163 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 13:20:08.469124 1092163 start.go:340] cluster config:
	{Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:20:08.469276 1092163 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:20:08.471257 1092163 out.go:177] * Starting "ha-942957" primary control-plane node in "ha-942957" cluster
	I0318 13:20:08.472814 1092163 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:20:08.472850 1092163 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:20:08.472858 1092163 cache.go:56] Caching tarball of preloaded images
	I0318 13:20:08.472939 1092163 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:20:08.472955 1092163 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:20:08.473123 1092163 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/config.json ...
	I0318 13:20:08.473380 1092163 start.go:360] acquireMachinesLock for ha-942957: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:20:08.473464 1092163 start.go:364] duration metric: took 61.357µs to acquireMachinesLock for "ha-942957"
	I0318 13:20:08.473491 1092163 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:20:08.473503 1092163 fix.go:54] fixHost starting: 
	I0318 13:20:08.473893 1092163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:20:08.473933 1092163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:20:08.488731 1092163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36083
	I0318 13:20:08.489166 1092163 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:20:08.489686 1092163 main.go:141] libmachine: Using API Version  1
	I0318 13:20:08.489715 1092163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:20:08.490124 1092163 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:20:08.490376 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:20:08.490544 1092163 main.go:141] libmachine: (ha-942957) Calling .GetState
	I0318 13:20:08.492158 1092163 fix.go:112] recreateIfNeeded on ha-942957: state=Running err=<nil>
	W0318 13:20:08.492179 1092163 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:20:08.495234 1092163 out.go:177] * Updating the running kvm2 "ha-942957" VM ...
	I0318 13:20:08.496676 1092163 machine.go:94] provisionDockerMachine start ...
	I0318 13:20:08.496697 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:20:08.496915 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:08.499435 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.499922 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.499964 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.500071 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:20:08.500228 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.500407 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.500555 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:20:08.500723 1092163 main.go:141] libmachine: Using SSH client type: native
	I0318 13:20:08.500995 1092163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:20:08.501009 1092163 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:20:08.609655 1092163 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-942957
	
	I0318 13:20:08.609685 1092163 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:20:08.609932 1092163 buildroot.go:166] provisioning hostname "ha-942957"
	I0318 13:20:08.609977 1092163 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:20:08.610180 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:08.613000 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.613451 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.613484 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.613629 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:20:08.613827 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.613987 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.614120 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:20:08.614305 1092163 main.go:141] libmachine: Using SSH client type: native
	I0318 13:20:08.614570 1092163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:20:08.614585 1092163 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-942957 && echo "ha-942957" | sudo tee /etc/hostname
	I0318 13:20:08.742269 1092163 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-942957
	
	I0318 13:20:08.742305 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:08.745476 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.745923 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.745960 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.746181 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:20:08.746400 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.746629 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.746820 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:20:08.746987 1092163 main.go:141] libmachine: Using SSH client type: native
	I0318 13:20:08.747221 1092163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:20:08.747246 1092163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-942957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-942957/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-942957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:20:08.853391 1092163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:20:08.853424 1092163 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 13:20:08.853464 1092163 buildroot.go:174] setting up certificates
	I0318 13:20:08.853478 1092163 provision.go:84] configureAuth start
	I0318 13:20:08.853497 1092163 main.go:141] libmachine: (ha-942957) Calling .GetMachineName
	I0318 13:20:08.853773 1092163 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:20:08.856146 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.856534 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.856568 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.856648 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:08.858817 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.859192 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.859216 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.859382 1092163 provision.go:143] copyHostCerts
	I0318 13:20:08.859429 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:20:08.859476 1092163 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 13:20:08.859490 1092163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:20:08.859586 1092163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 13:20:08.859724 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:20:08.859764 1092163 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 13:20:08.859775 1092163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:20:08.859820 1092163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 13:20:08.859917 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:20:08.859940 1092163 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 13:20:08.859946 1092163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:20:08.859982 1092163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 13:20:08.860085 1092163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.ha-942957 san=[127.0.0.1 192.168.39.68 ha-942957 localhost minikube]
	I0318 13:20:08.967598 1092163 provision.go:177] copyRemoteCerts
	I0318 13:20:08.967682 1092163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:20:08.967717 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:08.970562 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.970970 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:08.970999 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:08.971238 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:20:08.971450 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:08.971600 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:20:08.971763 1092163 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:20:09.056652 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:20:09.056728 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:20:09.088063 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:20:09.088161 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0318 13:20:09.116053 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:20:09.116153 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:20:09.145445 1092163 provision.go:87] duration metric: took 291.946863ms to configureAuth
	I0318 13:20:09.145478 1092163 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:20:09.145698 1092163 config.go:182] Loaded profile config "ha-942957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:20:09.145784 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:20:09.148682 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:09.149100 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:20:09.149128 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:20:09.149324 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:20:09.149568 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:09.149744 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:20:09.149907 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:20:09.150081 1092163 main.go:141] libmachine: Using SSH client type: native
	I0318 13:20:09.150273 1092163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:20:09.150294 1092163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:21:40.002485 1092163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:21:40.002599 1092163 machine.go:97] duration metric: took 1m31.50589929s to provisionDockerMachine
	I0318 13:21:40.002623 1092163 start.go:293] postStartSetup for "ha-942957" (driver="kvm2")
	I0318 13:21:40.002684 1092163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:21:40.002716 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.003181 1092163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:21:40.003272 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:21:40.007130 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.007630 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.007661 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.007956 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:21:40.008194 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.008383 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:21:40.008575 1092163 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:21:40.096950 1092163 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:21:40.101869 1092163 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:21:40.101934 1092163 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 13:21:40.102040 1092163 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 13:21:40.102128 1092163 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 13:21:40.102140 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /etc/ssl/certs/10752082.pem
	I0318 13:21:40.102256 1092163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:21:40.113462 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:21:40.142229 1092163 start.go:296] duration metric: took 139.5287ms for postStartSetup
	I0318 13:21:40.142299 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.142661 1092163 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0318 13:21:40.142693 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:21:40.145686 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.146184 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.146219 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.146469 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:21:40.146702 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.146894 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:21:40.147049 1092163 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	W0318 13:21:40.231172 1092163 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0318 13:21:40.231204 1092163 fix.go:56] duration metric: took 1m31.757701359s for fixHost
	I0318 13:21:40.231239 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:21:40.234153 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.234767 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.234801 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.234966 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:21:40.235260 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.235466 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.235633 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:21:40.235797 1092163 main.go:141] libmachine: Using SSH client type: native
	I0318 13:21:40.235999 1092163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0318 13:21:40.236010 1092163 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:21:40.345552 1092163 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710768100.313190208
	
	I0318 13:21:40.345576 1092163 fix.go:216] guest clock: 1710768100.313190208
	I0318 13:21:40.345583 1092163 fix.go:229] Guest: 2024-03-18 13:21:40.313190208 +0000 UTC Remote: 2024-03-18 13:21:40.231218385 +0000 UTC m=+91.903144127 (delta=81.971823ms)
	I0318 13:21:40.345613 1092163 fix.go:200] guest clock delta is within tolerance: 81.971823ms
	I0318 13:21:40.345618 1092163 start.go:83] releasing machines lock for "ha-942957", held for 1m31.872142796s
	I0318 13:21:40.345639 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.345931 1092163 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:21:40.348938 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.349349 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.349370 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.349601 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.350162 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.350400 1092163 main.go:141] libmachine: (ha-942957) Calling .DriverName
	I0318 13:21:40.350519 1092163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:21:40.350576 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:21:40.350644 1092163 ssh_runner.go:195] Run: cat /version.json
	I0318 13:21:40.350674 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHHostname
	I0318 13:21:40.353438 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.353625 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.353799 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.353828 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.353959 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:21:40.354104 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:40.354128 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:40.354132 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.354288 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:21:40.354302 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHPort
	I0318 13:21:40.354440 1092163 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:21:40.354495 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHKeyPath
	I0318 13:21:40.354648 1092163 main.go:141] libmachine: (ha-942957) Calling .GetSSHUsername
	I0318 13:21:40.354779 1092163 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/ha-942957/id_rsa Username:docker}
	I0318 13:21:40.459133 1092163 ssh_runner.go:195] Run: systemctl --version
	I0318 13:21:40.466080 1092163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:21:40.636674 1092163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:21:40.645350 1092163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:21:40.645430 1092163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:21:40.656320 1092163 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 13:21:40.656359 1092163 start.go:494] detecting cgroup driver to use...
	I0318 13:21:40.656439 1092163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:21:40.676947 1092163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:21:40.692818 1092163 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:21:40.692892 1092163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:21:40.708499 1092163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:21:40.725712 1092163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:21:40.902609 1092163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:21:41.071291 1092163 docker.go:233] disabling docker service ...
	I0318 13:21:41.071375 1092163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:21:41.094012 1092163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:21:41.110076 1092163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:21:41.279666 1092163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:21:41.450061 1092163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:21:41.466342 1092163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:21:41.485980 1092163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:21:41.486066 1092163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:21:41.497609 1092163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:21:41.497690 1092163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:21:41.510086 1092163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:21:41.522169 1092163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:21:41.535338 1092163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:21:41.548295 1092163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:21:41.560014 1092163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:21:41.571018 1092163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:21:41.721801 1092163 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:21:46.569386 1092163 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.847541809s)
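For reference, the crictl endpoint write and the sed edits above (pause image, cgroup manager, conmon cgroup) amount to a CRI-O drop-in roughly like the reconstruction below; the actual file contents were not captured in this run, so the section layout is an assumption based on CRI-O's stock configuration:

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed sketch, not captured output)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    # /etc/crictl.yaml (written verbatim by the tee command earlier in the log)
    runtime-endpoint: unix:///var/run/crio/crio.sock
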
	I0318 13:21:46.569431 1092163 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:21:46.569481 1092163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:21:46.576638 1092163 start.go:562] Will wait 60s for crictl version
	I0318 13:21:46.576708 1092163 ssh_runner.go:195] Run: which crictl
	I0318 13:21:46.580932 1092163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:21:46.629519 1092163 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:21:46.629631 1092163 ssh_runner.go:195] Run: crio --version
	I0318 13:21:46.667602 1092163 ssh_runner.go:195] Run: crio --version
	I0318 13:21:46.705325 1092163 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:21:46.706867 1092163 main.go:141] libmachine: (ha-942957) Calling .GetIP
	I0318 13:21:46.709797 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:46.710189 1092163 main.go:141] libmachine: (ha-942957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:d5:73", ip: ""} in network mk-ha-942957: {Iface:virbr1 ExpiryTime:2024-03-18 14:10:06 +0000 UTC Type:0 Mac:52:54:00:1a:d5:73 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-942957 Clientid:01:52:54:00:1a:d5:73}
	I0318 13:21:46.710218 1092163 main.go:141] libmachine: (ha-942957) DBG | domain ha-942957 has defined IP address 192.168.39.68 and MAC address 52:54:00:1a:d5:73 in network mk-ha-942957
	I0318 13:21:46.710392 1092163 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:21:46.715796 1092163 kubeadm.go:877] updating cluster {Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:21:46.716024 1092163 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:21:46.716086 1092163 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:21:46.770991 1092163 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:21:46.771017 1092163 crio.go:415] Images already preloaded, skipping extraction
	I0318 13:21:46.771084 1092163 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:21:46.818430 1092163 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:21:46.818459 1092163 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:21:46.818469 1092163 kubeadm.go:928] updating node { 192.168.39.68 8443 v1.28.4 crio true true} ...
	I0318 13:21:46.818633 1092163 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-942957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
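The empty ExecStart= followed by a second ExecStart= in the kubelet unit fragment above is the standard systemd drop-in pattern for replacing, rather than appending to, a unit's command line. A minimal sketch of how such an override is typically installed and activated follows; the drop-in path is illustrative and not taken from this log, while the flags mirror the generated fragment:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Unit]
    Wants=crio.service
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-942957 --node-ip=192.168.39.68
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
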
	I0318 13:21:46.818702 1092163 ssh_runner.go:195] Run: crio config
	I0318 13:21:46.877206 1092163 cni.go:84] Creating CNI manager for ""
	I0318 13:21:46.877233 1092163 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 13:21:46.877245 1092163 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:21:46.877268 1092163 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-942957 NodeName:ha-942957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:21:46.877427 1092163 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-942957"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
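	The kubeadm, kubelet and kube-proxy configuration printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few steps below. As a rough offline sanity check (a sketch only; "kubeadm.yaml" is a hypothetical local copy of the file, not an artifact of this run), recent kubeadm releases can validate such a file directly:
	
	    kubeadm config validate --config kubeadm.yaml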
	
	I0318 13:21:46.877449 1092163 kube-vip.go:111] generating kube-vip config ...
	I0318 13:21:46.877493 1092163 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 13:21:46.892327 1092163 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 13:21:46.892450 1092163 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
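	The static pod manifest above is what gets written to /etc/kubernetes/manifests/kube-vip.yaml below; it runs kube-vip with ARP announcements (vip_arp) and leader election (vip_leaderelection) on eth0, advertising the control-plane VIP 192.168.39.254 and load-balancing the API server on port 8443. As a quick way to see which node currently holds the VIP (a sketch, not part of this run), the interface can be inspected over minikube ssh:
	
	    minikube -p ha-942957 ssh -- ip -4 addr show eth0 | grep 192.168.39.254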
	I0318 13:21:46.892550 1092163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:21:46.905612 1092163 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:21:46.905681 1092163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 13:21:46.918237 1092163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0318 13:21:46.937357 1092163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:21:46.955920 1092163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0318 13:21:46.974163 1092163 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 13:21:46.995212 1092163 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 13:21:47.000234 1092163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:21:47.156018 1092163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:21:47.171556 1092163 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957 for IP: 192.168.39.68
	I0318 13:21:47.171584 1092163 certs.go:194] generating shared ca certs ...
	I0318 13:21:47.171602 1092163 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:21:47.171763 1092163 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 13:21:47.171803 1092163 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 13:21:47.171812 1092163 certs.go:256] generating profile certs ...
	I0318 13:21:47.171942 1092163 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/client.key
	I0318 13:21:47.171990 1092163 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.1ed4ec2d
	I0318 13:21:47.172021 1092163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.1ed4ec2d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68 192.168.39.22 192.168.39.135 192.168.39.254]
	I0318 13:21:47.242691 1092163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.1ed4ec2d ...
	I0318 13:21:47.242730 1092163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.1ed4ec2d: {Name:mkb2aa9441539bd10df2b9f542ba2041cc70f24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:21:47.242922 1092163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.1ed4ec2d ...
	I0318 13:21:47.242936 1092163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.1ed4ec2d: {Name:mk641cf40bcc588b24d450e29e1193be4a235ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:21:47.243010 1092163 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt.1ed4ec2d -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt
	I0318 13:21:47.243217 1092163 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key.1ed4ec2d -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key
	I0318 13:21:47.243361 1092163 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key
	I0318 13:21:47.243379 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:21:47.243391 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:21:47.243404 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:21:47.243417 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:21:47.243428 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:21:47.243439 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:21:47.243459 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:21:47.243472 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:21:47.243525 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 13:21:47.243555 1092163 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 13:21:47.243565 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 13:21:47.243585 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:21:47.243606 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:21:47.243630 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 13:21:47.243666 1092163 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:21:47.243693 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /usr/share/ca-certificates/10752082.pem
	I0318 13:21:47.243707 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:21:47.243719 1092163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem -> /usr/share/ca-certificates/1075208.pem
	I0318 13:21:47.244390 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:21:47.273809 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:21:47.333566 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:21:47.361818 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:21:47.388574 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 13:21:47.416790 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:21:47.443573 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:21:47.470237 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/ha-942957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:21:47.497363 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 13:21:47.526366 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:21:47.552916 1092163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 13:21:47.579908 1092163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:21:47.598596 1092163 ssh_runner.go:195] Run: openssl version
	I0318 13:21:47.605448 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 13:21:47.617568 1092163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 13:21:47.622908 1092163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:21:47.622991 1092163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 13:21:47.629328 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:21:47.639904 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:21:47.651766 1092163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:21:47.656750 1092163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:21:47.656829 1092163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:21:47.663197 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:21:47.673664 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 13:21:47.685924 1092163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 13:21:47.690895 1092163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:21:47.690977 1092163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 13:21:47.697334 1092163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 13:21:47.707610 1092163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:21:47.712852 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:21:47.719284 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:21:47.725482 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:21:47.731731 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:21:47.738235 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:21:47.744539 1092163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
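	The openssl x509 -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 86400 seconds (24 hours); openssl exits non-zero if the certificate would expire within that window. The same check can be reproduced by hand against any of the paths listed (a sketch using one of the files above):
	
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid for at least 24h"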
	I0318 13:21:47.750658 1092163 kubeadm.go:391] StartCluster: {Name:ha-942957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-942957 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.135 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:21:47.750790 1092163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:21:47.750860 1092163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:21:47.799209 1092163 cri.go:89] found id: "01653352dfce8459170f2cd9ba8aac4f7bab0af56170cce15b3dddc4742c33cf"
	I0318 13:21:47.799237 1092163 cri.go:89] found id: "062711d0381093b919397a578a6f81b07f2007e909c784461f578842f5a218fc"
	I0318 13:21:47.799242 1092163 cri.go:89] found id: "17b5593ecae1319d07eafe30bda73a980b98ac5321fd162acfe59b3400d9c1a5"
	I0318 13:21:47.799249 1092163 cri.go:89] found id: "16c29c371f2defe9234b0a49883af3e411bd07c5e0e171d7b3e812dd9325f474"
	I0318 13:21:47.799253 1092163 cri.go:89] found id: "0b8695cf0ac16e0c699161c8d6757608a3b09833cf6b0ac3aa257a9f9443bdb1"
	I0318 13:21:47.799257 1092163 cri.go:89] found id: "3084769e1ff800f860efac29271cdcd098fb57447c7f13bd9fec037208560ad7"
	I0318 13:21:47.799260 1092163 cri.go:89] found id: "c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67"
	I0318 13:21:47.799264 1092163 cri.go:89] found id: "e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945"
	I0318 13:21:47.799269 1092163 cri.go:89] found id: "3a01c2a33ecf6a0d98f22101c842d00f4a021364fa1c741b0fa3c0f28f85f8b9"
	I0318 13:21:47.799278 1092163 cri.go:89] found id: "11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1"
	I0318 13:21:47.799297 1092163 cri.go:89] found id: "09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99"
	I0318 13:21:47.799301 1092163 cri.go:89] found id: "829af6255f575106067897607206cf08a66f503d77bc7c61af6fb6dac2ab31a1"
	I0318 13:21:47.799305 1092163 cri.go:89] found id: "ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7"
	I0318 13:21:47.799309 1092163 cri.go:89] found id: "ff86796bcd1517e906cd7a34813fa3d47a72e26956b8903f3f142abb0f959242"
	I0318 13:21:47.799317 1092163 cri.go:89] found id: ""
	I0318 13:21:47.799369 1092163 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.952618645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2d358be-c30a-472f-bee2-87dc4ebe0ba0 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.955325228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0650b933-24ea-4173-9a37-d9f85ced51a3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.955844467Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768416955817035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0650b933-24ea-4173-9a37-d9f85ced51a3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.956510082Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcadc9da-6fb0-4b36-98b9-85a7a81dcb00 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.956752984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcadc9da-6fb0-4b36-98b9-85a7a81dcb00 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.957209443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4599693f823e6f478a7a630e48b8674ad7c69329a3629729c9a8f618c02e5c7,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768205014900453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb03720fabfe408fd30b89414d0d98f5f4bb3691e6b3750ded072923397f5915,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710768183016581645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302d6170ad889e67d47fda9faec591edcf2b4b9ecb26c4278871f72aee01329,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710768159008079630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d89e600a2a98b1500584b038f2146960362b64eb3f2196d13980dfdd501984f,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710768153014877517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e9ab707f966c8321bddb3178cbf660b8a9043e64cccd383de342fac31c59c1,PodSandboxId:a18c3ff2ffa868df97b01e93d8c2fcb8cccc618913371cb02fec37bea5e1f336,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710768146346203966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0b795d2bdf59643abf48253311965d7b9a4105fca8cbf0df38de022edfa637,PodSandboxId:f9a167f3b0217a2c19b61d4839225f1770ab7d71cf5088a455c11aea68920d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710768113827080591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:b1d58a64362f84f8062abcbcf47099323b0cc69785aecf233a1f559245de2e27,PodSandboxId:d33166cce56756d4692cbc70da65786ec6a41e7e75ff804223349b38ce0c2491,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710768113617284640,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:355dac9c183eaf505cd72808ccccdacf517f30ad0d26d6960f5d4e3094471652,PodSandboxId:1fb74babe23fc821be46df51b5aa6ff7451cbb71c03b28f2d94a9d5c768ad0f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113469762995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710768113203439888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 2,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4790438e5d4965634666c4e3a565cc62de39f46d29a13d541d44c19afd87e9b,PodSandboxId:52ff694a82f3e8179666764794f97e5f39c5d7ec0665326121b45bbf0af6dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113231529083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcec3d819d32ce0be5c1b7b8abda9244523879a1788b2233fe934760b1126d90,PodSandboxId:3b02456c19e4132b342660c2e01b7b06f1dbc9fba8e4647697bdbb5dab719935,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710768113129840110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f93
83385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c83f6d14628f38cfecaf08a1c77500ec15d1305da4bad5aa27aec23b1931a82,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710768113148301449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5051b1a59a5bff141d1b26536b335d46317ed445c531f56a9d47fcf96874074f,PodSandboxId:c3736fc3b42858eff89a728d8815e0748b6bc0557cb21e0efbd90a21ecf82be5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710768113134214782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{i
o.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710768112831896821,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.
container.hash: cf352393,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96898006f26bb34adf8d6605356268848814dcfae484bd2e03f87a657d58c459,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710768112978305364,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubern
etes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8695cf0ac16e0c699161c8d6757608a3b09833cf6b0ac3aa257a9f9443bdb1,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767913024614151,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767609255434077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297552095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297898177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710767450094191982,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXI
TED,CreatedAt:1710767428281748509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:171076742813612617
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcadc9da-6fb0-4b36-98b9-85a7a81dcb00 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.996964654Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98340f38-cc7d-4996-8299-94eb84390783 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.997254926Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a18c3ff2ffa868df97b01e93d8c2fcb8cccc618913371cb02fec37bea5e1f336,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-h4q2t,Uid:19f21998-36db-4286-8e31-bf260f71ea46,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710768146159089688,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:13:26.778880953Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52ff694a82f3e8179666764794f97e5f39c5d7ec0665326121b45bbf0af6dce4,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-pbr9j,Uid:b011a4b6-807e-4af3-90f5-bc9af8ccd454,Namespace:kube-system,Attempt:1,},Sta
te:SANDBOX_READY,CreatedAt:1710768112569145515,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:10:53.645043038Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fb74babe23fc821be46df51b5aa6ff7451cbb71c03b28f2d94a9d5c768ad0f5,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-f6dtz,Uid:78994887-c343-49aa-bc5d-e099da752ad6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710768112545044984,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.see
n: 2024-03-18T13:10:53.659633844Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3736fc3b42858eff89a728d8815e0748b6bc0557cb21e0efbd90a21ecf82be5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-942957,Uid:16a417deef6e3d8a8645cbf67a3c4710,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710768112470190649,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 16a417deef6e3d8a8645cbf67a3c4710,kubernetes.io/config.seen: 2024-03-18T13:10:34.875732759Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d33166cce56756d4692cbc70da65786ec6a41e7e75ff804223349b38ce0c2491,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-942957,Uid:c504e52bc56157cc75ee682693876de8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READ
Y,CreatedAt:1710768112469269230,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{kubernetes.io/config.hash: c504e52bc56157cc75ee682693876de8,kubernetes.io/config.seen: 2024-03-18T13:10:34.875736134Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3b02456c19e4132b342660c2e01b7b06f1dbc9fba8e4647697bdbb5dab719935,Metadata:&PodSandboxMetadata{Name:etcd-ha-942957,Uid:88d65e66a53e070453c2f9383385fd98,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710768112465041291,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.68:2379
,kubernetes.io/config.hash: 88d65e66a53e070453c2f9383385fd98,kubernetes.io/config.seen: 2024-03-18T13:10:34.875737089Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-942957,Uid:3ac151e5f24dd4f2efdd1ee0628307c8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710768112461618214,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3ac151e5f24dd4f2efdd1ee0628307c8,kubernetes.io/config.seen: 2024-03-18T13:10:34.875738596Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&PodSandboxMetadata{
Name:kindnet-6rgvl,Uid:eb410475-7c79-4ac1-b7df-a4781100d228,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710768112445595985,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:10:48.158017692Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f9a167f3b0217a2c19b61d4839225f1770ab7d71cf5088a455c11aea68920d25,Metadata:&PodSandboxMetadata{Name:kube-proxy-97vsd,Uid:a4d03704-5a4b-4973-b178-912218d00802,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710768112445099094,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:10:48.171008000Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-942957,Uid:282f67810906176442b1ebeaef8ea17d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710768112433704435,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.68:8443,kubernetes.io/config.hash: 282f67810906176442b1ebeaef8ea17d,kubernetes.io/config.seen: 2024-03-18T13:10:34.875737893Z,kubernetes.io/config.source: f
ile,},RuntimeHandler:,},&PodSandbox{Id:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b67e544b-41f2-4be4-90ed-971378c82a76,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710768112431518259,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imag
ePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T13:10:53.655742968Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=98340f38-cc7d-4996-8299-94eb84390783 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.998052410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=861feab2-286f-4d38-8e1d-411e7d0bbbce name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.998145920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=861feab2-286f-4d38-8e1d-411e7d0bbbce name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:56 ha-942957 crio[3995]: time="2024-03-18 13:26:56.998401058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4599693f823e6f478a7a630e48b8674ad7c69329a3629729c9a8f618c02e5c7,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768205014900453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb03720fabfe408fd30b89414d0d98f5f4bb3691e6b3750ded072923397f5915,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710768183016581645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302d6170ad889e67d47fda9faec591edcf2b4b9ecb26c4278871f72aee01329,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710768159008079630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d89e600a2a98b1500584b038f2146960362b64eb3f2196d13980dfdd501984f,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710768153014877517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e9ab707f966c8321bddb3178cbf660b8a9043e64cccd383de342fac31c59c1,PodSandboxId:a18c3ff2ffa868df97b01e93d8c2fcb8cccc618913371cb02fec37bea5e1f336,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710768146346203966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0b795d2bdf59643abf48253311965d7b9a4105fca8cbf0df38de022edfa637,PodSandboxId:f9a167f3b0217a2c19b61d4839225f1770ab7d71cf5088a455c11aea68920d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710768113827080591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:b1d58a64362f84f8062abcbcf47099323b0cc69785aecf233a1f559245de2e27,PodSandboxId:d33166cce56756d4692cbc70da65786ec6a41e7e75ff804223349b38ce0c2491,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710768113617284640,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:355dac9c183eaf505cd72808ccccdacf517f30ad0d26d6960f5d4e3094471652,PodSandboxId:1fb74babe23fc821be46df51b5aa6ff7451cbb71c03b28f2d94a9d5c768ad0f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113469762995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4790438e5d4965634666c4e3a565cc62de39f46d29a13d541d44c19afd87e9b,PodSandboxId:52ff694a82f3e8179666764794f97e5f39c5d7ec0665326121b45bbf0af6dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113231529083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcec3d819d32ce0be5c1b7b8abda9244523879a1788b2233fe934760b1126d90,PodSandboxId:3b02456c19e4132b342660c2e01b7b06f1dbc9fba8e4647697bdbb5dab719935,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710768113129840110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5051b1a59a5bff141d1b26536b335d46317ed445c531f56a9d47fcf96874074f,PodSandboxId:c3736fc3b42858eff89a728d8815e0748b6bc0557cb21e0efbd90a21ecf82be5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710768113134214782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417de
ef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=861feab2-286f-4d38-8e1d-411e7d0bbbce name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.005940241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2a843d7-fb3f-4f0c-a870-7837f4e9c1ca name=/runtime.v1.RuntimeService/Version
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.006045684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2a843d7-fb3f-4f0c-a870-7837f4e9c1ca name=/runtime.v1.RuntimeService/Version
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.010719833Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18af9602-298b-4cca-a5ab-eba6b8e0ffb1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.011266715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768417011242623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18af9602-298b-4cca-a5ab-eba6b8e0ffb1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.014971849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fd959d9-2e41-4feb-8795-23ca88b76bf4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.015095478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fd959d9-2e41-4feb-8795-23ca88b76bf4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.015530117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4599693f823e6f478a7a630e48b8674ad7c69329a3629729c9a8f618c02e5c7,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768205014900453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb03720fabfe408fd30b89414d0d98f5f4bb3691e6b3750ded072923397f5915,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710768183016581645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302d6170ad889e67d47fda9faec591edcf2b4b9ecb26c4278871f72aee01329,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710768159008079630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d89e600a2a98b1500584b038f2146960362b64eb3f2196d13980dfdd501984f,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710768153014877517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e9ab707f966c8321bddb3178cbf660b8a9043e64cccd383de342fac31c59c1,PodSandboxId:a18c3ff2ffa868df97b01e93d8c2fcb8cccc618913371cb02fec37bea5e1f336,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710768146346203966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0b795d2bdf59643abf48253311965d7b9a4105fca8cbf0df38de022edfa637,PodSandboxId:f9a167f3b0217a2c19b61d4839225f1770ab7d71cf5088a455c11aea68920d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710768113827080591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:b1d58a64362f84f8062abcbcf47099323b0cc69785aecf233a1f559245de2e27,PodSandboxId:d33166cce56756d4692cbc70da65786ec6a41e7e75ff804223349b38ce0c2491,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710768113617284640,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:355dac9c183eaf505cd72808ccccdacf517f30ad0d26d6960f5d4e3094471652,PodSandboxId:1fb74babe23fc821be46df51b5aa6ff7451cbb71c03b28f2d94a9d5c768ad0f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113469762995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710768113203439888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 2,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4790438e5d4965634666c4e3a565cc62de39f46d29a13d541d44c19afd87e9b,PodSandboxId:52ff694a82f3e8179666764794f97e5f39c5d7ec0665326121b45bbf0af6dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113231529083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcec3d819d32ce0be5c1b7b8abda9244523879a1788b2233fe934760b1126d90,PodSandboxId:3b02456c19e4132b342660c2e01b7b06f1dbc9fba8e4647697bdbb5dab719935,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710768113129840110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f93
83385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c83f6d14628f38cfecaf08a1c77500ec15d1305da4bad5aa27aec23b1931a82,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710768113148301449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5051b1a59a5bff141d1b26536b335d46317ed445c531f56a9d47fcf96874074f,PodSandboxId:c3736fc3b42858eff89a728d8815e0748b6bc0557cb21e0efbd90a21ecf82be5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710768113134214782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{i
o.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710768112831896821,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.
container.hash: cf352393,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96898006f26bb34adf8d6605356268848814dcfae484bd2e03f87a657d58c459,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710768112978305364,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubern
etes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8695cf0ac16e0c699161c8d6757608a3b09833cf6b0ac3aa257a9f9443bdb1,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767913024614151,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767609255434077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297552095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297898177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710767450094191982,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXI
TED,CreatedAt:1710767428281748509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:171076742813612617
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fd959d9-2e41-4feb-8795-23ca88b76bf4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.058455083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=812327e7-7c96-40c3-84ec-fc6cb330a847 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.058579218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=812327e7-7c96-40c3-84ec-fc6cb330a847 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.059767981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43a8450f-5762-4f5c-9141-d716aa618275 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.060346098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768417060321083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43a8450f-5762-4f5c-9141-d716aa618275 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.060932403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df98c724-8ef8-4a7a-ba40-8e80faf9a03a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.061008885Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df98c724-8ef8-4a7a-ba40-8e80faf9a03a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:26:57 ha-942957 crio[3995]: time="2024-03-18 13:26:57.061452238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4599693f823e6f478a7a630e48b8674ad7c69329a3629729c9a8f618c02e5c7,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768205014900453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.container.hash: cf352393,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb03720fabfe408fd30b89414d0d98f5f4bb3691e6b3750ded072923397f5915,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710768183016581645,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302d6170ad889e67d47fda9faec591edcf2b4b9ecb26c4278871f72aee01329,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710768159008079630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d89e600a2a98b1500584b038f2146960362b64eb3f2196d13980dfdd501984f,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710768153014877517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e9ab707f966c8321bddb3178cbf660b8a9043e64cccd383de342fac31c59c1,PodSandboxId:a18c3ff2ffa868df97b01e93d8c2fcb8cccc618913371cb02fec37bea5e1f336,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710768146346203966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0b795d2bdf59643abf48253311965d7b9a4105fca8cbf0df38de022edfa637,PodSandboxId:f9a167f3b0217a2c19b61d4839225f1770ab7d71cf5088a455c11aea68920d25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710768113827080591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:b1d58a64362f84f8062abcbcf47099323b0cc69785aecf233a1f559245de2e27,PodSandboxId:d33166cce56756d4692cbc70da65786ec6a41e7e75ff804223349b38ce0c2491,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710768113617284640,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:355dac9c183eaf505cd72808ccccdacf517f30ad0d26d6960f5d4e3094471652,PodSandboxId:1fb74babe23fc821be46df51b5aa6ff7451cbb71c03b28f2d94a9d5c768ad0f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113469762995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b,PodSandboxId:d8718ce51df7d20c009bc6a8bc10f3f000459aef35c2d475e6a711eb0df736a4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710768113203439888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6rgvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb410475-7c79-4ac1-b7df-a4781100d228,},Annotations:map[string]string{io.kubernetes.container.hash: 4e02971,io.kubernetes.container.restartCount: 2,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4790438e5d4965634666c4e3a565cc62de39f46d29a13d541d44c19afd87e9b,PodSandboxId:52ff694a82f3e8179666764794f97e5f39c5d7ec0665326121b45bbf0af6dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710768113231529083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcec3d819d32ce0be5c1b7b8abda9244523879a1788b2233fe934760b1126d90,PodSandboxId:3b02456c19e4132b342660c2e01b7b06f1dbc9fba8e4647697bdbb5dab719935,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710768113129840110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f93
83385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c83f6d14628f38cfecaf08a1c77500ec15d1305da4bad5aa27aec23b1931a82,PodSandboxId:6d5f4d2062041e36e8c83cdaef76c88a7e45e9030d18821e9965db48dcab272f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710768113148301449,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282f67810906176442b1ebeaef8ea17d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5051b1a59a5bff141d1b26536b335d46317ed445c531f56a9d47fcf96874074f,PodSandboxId:c3736fc3b42858eff89a728d8815e0748b6bc0557cb21e0efbd90a21ecf82be5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710768113134214782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{i
o.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9,PodSandboxId:f9ec5f3d1519533332fbcb8b3adf171fdbed8398034c37a8f895ab308424a700,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710768112831896821,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67e544b-41f2-4be4-90ed-971378c82a76,},Annotations:map[string]string{io.kubernetes.
container.hash: cf352393,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96898006f26bb34adf8d6605356268848814dcfae484bd2e03f87a657d58c459,PodSandboxId:446c946431e4d5d0104c3fd8afa610dc8166a3e11d2539fd9afbeb52d415acf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710768112978305364,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ac151e5f24dd4f2efdd1ee0628307c8,},Annotations:map[string]string{io.kubern
etes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8695cf0ac16e0c699161c8d6757608a3b09833cf6b0ac3aa257a9f9443bdb1,PodSandboxId:750ec46160c5a4f82bf4f8730a23910b18ea84983a68a9bc37e88e75b6640752,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710767913024614151,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c504e52bc56157cc75ee682693876de8,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f97ca3edceee5734235099145f0d559609f0041b72adf1d17c1bf6663afed,PodSandboxId:a2d21119e214a24fbc0a68a25ba518b4c30ec66e8b2035c8f1c91ec39ff0b94b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767609255434077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-h4q2t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19f21998-36db-4286-8e31-bf260f71ea46,},Annotations:map[string]string{io.kubernetes.container.hash: 852cc6f5,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945,PodSandboxId:3daf97324e58a07a16c31e1352ffe06eb20778fd71415f1c41d1caaeadcf2818,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297552095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pbr9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b011a4b6-807e-4af3-90f5-bc9af8ccd454,},Annotations:map[string]string{io.kubernetes.container.hash: 270d3a20,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67,PodSandboxId:0b6911927b37f40d650b7b3cbb708f276a497e06e4d52dbe935c788521f1bea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767454297898177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-f6dtz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 78994887-c343-49aa-bc5d-e099da752ad6,},Annotations:map[string]string{io.kubernetes.container.hash: 49d6e474,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1,PodSandboxId:c4b520f79bf4b096ac3a3b8ab71235c7db9f9a1e50204df3b47afe7b0c299f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710767450094191982,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-97vsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d03704-5a4b-4973-b178-912218d00802,},Annotations:map[string]string{io.kubernetes.container.hash: e1e176a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99,PodSandboxId:6e0049bc30922673596c36789588d89609ab39ee56610019737a47e27c5913b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXI
TED,CreatedAt:1710767428281748509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a417deef6e3d8a8645cbf67a3c4710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7,PodSandboxId:c9e7a1111cb30c7e8f513575a0f13fbfb234410857f762e19bc15f73907eb26e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:171076742813612617
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-942957,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d65e66a53e070453c2f9383385fd98,},Annotations:map[string]string{io.kubernetes.container.hash: 5175769b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df98c724-8ef8-4a7a-ba40-8e80faf9a03a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c4599693f823e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       5                   f9ec5f3d15195       storage-provisioner
	cb03720fabfe4       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               3                   d8718ce51df7d       kindnet-6rgvl
	e302d6170ad88       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      4 minutes ago       Running             kube-controller-manager   2                   446c946431e4d       kube-controller-manager-ha-942957
	3d89e600a2a98       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      4 minutes ago       Running             kube-apiserver            3                   6d5f4d2062041       kube-apiserver-ha-942957
	d0e9ab707f966       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   a18c3ff2ffa86       busybox-5b5d89c9d6-h4q2t
	8f0b795d2bdf5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      5 minutes ago       Running             kube-proxy                1                   f9a167f3b0217       kube-proxy-97vsd
	b1d58a64362f8       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  3                   d33166cce5675       kube-vip-ha-942957
	355dac9c183ea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   1fb74babe23fc       coredns-5dd5756b68-f6dtz
	f4790438e5d49       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   52ff694a82f3e       coredns-5dd5756b68-pbr9j
	7ead34fbee6f7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   d8718ce51df7d       kindnet-6rgvl
	9c83f6d14628f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      5 minutes ago       Exited              kube-apiserver            2                   6d5f4d2062041       kube-apiserver-ha-942957
	5051b1a59a5bf       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      5 minutes ago       Running             kube-scheduler            1                   c3736fc3b4285       kube-scheduler-ha-942957
	dcec3d819d32c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      5 minutes ago       Running             etcd                      1                   3b02456c19e41       etcd-ha-942957
	96898006f26bb       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      5 minutes ago       Exited              kube-controller-manager   1                   446c946431e4d       kube-controller-manager-ha-942957
	4e9f17e03f23a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       4                   f9ec5f3d15195       storage-provisioner
	0b8695cf0ac16       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      8 minutes ago       Exited              kube-vip                  2                   750ec46160c5a       kube-vip-ha-942957
	bc6f97ca3edce       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   a2d21119e214a       busybox-5b5d89c9d6-h4q2t
	c859be2ef6bde       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   0b6911927b37f       coredns-5dd5756b68-f6dtz
	e2cf377b129d8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   3daf97324e58a       coredns-5dd5756b68-pbr9j
	11bc6358bf6d2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      16 minutes ago      Exited              kube-proxy                0                   c4b520f79bf4b       kube-proxy-97vsd
	09364d1b0b8ec       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      16 minutes ago      Exited              kube-scheduler            0                   6e0049bc30922       kube-scheduler-ha-942957
	ac909d1fea8aa       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      16 minutes ago      Exited              etcd                      0                   c9e7a1111cb30       etcd-ha-942957
	
	
	==> coredns [355dac9c183eaf505cd72808ccccdacf517f30ad0d26d6960f5d4e3094471652] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47177 - 46844 "HINFO IN 3095807628460413902.4050584119888136970. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.07791038s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45980->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c859be2ef6bde747f1589c46bc542432f0c6e34275be27bfffda08b2676b4b67] <==
	[INFO] 10.244.0.4:59741 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000711436s
	[INFO] 10.244.1.2:33325 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003796173s
	[INFO] 10.244.1.2:40118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184538s
	[INFO] 10.244.1.2:38695 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158047s
	[INFO] 10.244.2.2:39278 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001539379s
	[INFO] 10.244.2.2:48574 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165918s
	[INFO] 10.244.0.4:52698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113008s
	[INFO] 10.244.0.4:50001 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135799s
	[INFO] 10.244.0.4:49373 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159584s
	[INFO] 10.244.1.2:44441 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118463s
	[INFO] 10.244.2.2:42552 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221661s
	[INFO] 10.244.2.2:46062 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090758s
	[INFO] 10.244.0.4:53179 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092569s
	[INFO] 10.244.1.2:45351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128077s
	[INFO] 10.244.1.2:52758 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144551s
	[INFO] 10.244.1.2:47551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000203433s
	[INFO] 10.244.2.2:53980 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115616s
	[INFO] 10.244.2.2:55318 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000181469s
	[INFO] 10.244.0.4:60630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069346s
	[INFO] 10.244.0.4:41251 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040242s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e2cf377b129d85a2fcf1f099fb61a211f2e2ad54f1f48527d9bf8eaa4ff29945] <==
	[INFO] 10.244.2.2:46720 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002275748s
	[INFO] 10.244.2.2:50733 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000275044s
	[INFO] 10.244.2.2:37004 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138849s
	[INFO] 10.244.2.2:33563 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000224767s
	[INFO] 10.244.2.2:42566 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017421s
	[INFO] 10.244.0.4:54486 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00168008s
	[INFO] 10.244.0.4:46746 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001363608s
	[INFO] 10.244.0.4:38530 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231105s
	[INFO] 10.244.0.4:47152 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045351s
	[INFO] 10.244.0.4:57247 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070307s
	[INFO] 10.244.1.2:43996 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140398s
	[INFO] 10.244.1.2:36237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220389s
	[INFO] 10.244.1.2:37302 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111738s
	[INFO] 10.244.2.2:58342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134629s
	[INFO] 10.244.2.2:43645 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160061s
	[INFO] 10.244.0.4:58375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210567s
	[INFO] 10.244.0.4:50302 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075795s
	[INFO] 10.244.0.4:46012 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084361s
	[INFO] 10.244.1.2:37085 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000242114s
	[INFO] 10.244.2.2:47856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000192734s
	[INFO] 10.244.2.2:42553 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000213437s
	[INFO] 10.244.0.4:53951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102273s
	[INFO] 10.244.0.4:44758 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071111s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f4790438e5d4965634666c4e3a565cc62de39f46d29a13d541d44c19afd87e9b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42325 - 29984 "HINFO IN 4194563739134571877.5442077711432167674. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046508538s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54200->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-942957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_10_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:10:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:26:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:22:35 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:22:35 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:22:35 +0000   Mon, 18 Mar 2024 13:10:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:22:35 +0000   Mon, 18 Mar 2024 13:10:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-942957
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 98d7d2d7e6f44e39a7470fa399e42587
	  System UUID:                98d7d2d7-e6f4-4e39-a747-0fa399e42587
	  Boot ID:                    8d77322f-23ab-4abb-a476-3a13d0f588c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-h4q2t             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-f6dtz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-5dd5756b68-pbr9j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-942957                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-6rgvl                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-942957             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-942957    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-97vsd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-942957             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-942957                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m20s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-942957 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-942957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-942957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-942957 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Warning  ContainerGCFailed        5m23s (x2 over 6m23s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m7s                   node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal   RegisteredNode           4m7s                   node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	  Normal   RegisteredNode           3m7s                   node-controller  Node ha-942957 event: Registered Node ha-942957 in Controller
	
	
	Name:               ha-942957-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_12_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:11:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:26:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:23:20 +0000   Mon, 18 Mar 2024 13:22:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:23:20 +0000   Mon, 18 Mar 2024 13:22:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:23:20 +0000   Mon, 18 Mar 2024 13:22:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:23:20 +0000   Mon, 18 Mar 2024 13:22:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-942957-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 effa4806d9ac4aae93234a5f4797b41e
	  System UUID:                effa4806-d9ac-4aae-9323-4a5f4797b41e
	  Boot ID:                    0d8480b6-af1f-4533-9aa2-3ade23cb65c3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9qmdx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-942957-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-d4smn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-942957-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-942957-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vjmnr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-942957-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-942957-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  RegisteredNode           14m                    node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-942957-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m45s (x8 over 4m45s)  kubelet          Node ha-942957-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s (x8 over 4m45s)  kubelet          Node ha-942957-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m45s (x7 over 4m45s)  kubelet          Node ha-942957-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	  Normal  RegisteredNode           3m7s                   node-controller  Node ha-942957-m02 event: Registered Node ha-942957-m02 in Controller
	
	
	Name:               ha-942957-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-942957-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=ha-942957
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_14_08_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:14:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-942957-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:24:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 13:24:09 +0000   Mon, 18 Mar 2024 13:25:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 13:24:09 +0000   Mon, 18 Mar 2024 13:25:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 13:24:09 +0000   Mon, 18 Mar 2024 13:25:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 13:24:09 +0000   Mon, 18 Mar 2024 13:25:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    ha-942957-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 b16089a645be4a78a5280af4bb880ea8
	  System UUID:                b16089a6-45be-4a78-a528-0af4bb880ea8
	  Boot ID:                    591f9bf4-0d70-41f4-9e0f-5e273cf420c1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-kbw6x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-g4lxl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-gjnnp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)      kubelet          Node ha-942957-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)      kubelet          Node ha-942957-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)      kubelet          Node ha-942957-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-942957-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m7s                   node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   RegisteredNode           4m7s                   node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   RegisteredNode           3m7s                   node-controller  Node ha-942957-m04 event: Registered Node ha-942957-m04 in Controller
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-942957-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-942957-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-942957-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-942957-m04 has been rebooted, boot id: 591f9bf4-0d70-41f4-9e0f-5e273cf420c1
	  Normal   NodeReady                2m48s                  kubelet          Node ha-942957-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 3m27s)   node-controller  Node ha-942957-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.067435] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059503] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.165737] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.136769] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.243119] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.843891] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.062146] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.956739] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +1.288333] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.601273] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.093539] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.596662] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.054967] kauditd_printk_skb: 53 callbacks suppressed
	[Mar18 13:11] kauditd_printk_skb: 11 callbacks suppressed
	[Mar18 13:19] kauditd_printk_skb: 1 callbacks suppressed
	[Mar18 13:21] systemd-fstab-generator[3918]: Ignoring "noauto" option for root device
	[  +0.174625] systemd-fstab-generator[3930]: Ignoring "noauto" option for root device
	[  +0.208340] systemd-fstab-generator[3944]: Ignoring "noauto" option for root device
	[  +0.167312] systemd-fstab-generator[3956]: Ignoring "noauto" option for root device
	[  +0.280222] systemd-fstab-generator[3980]: Ignoring "noauto" option for root device
	[  +5.424372] systemd-fstab-generator[4080]: Ignoring "noauto" option for root device
	[  +0.095840] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.302809] kauditd_printk_skb: 12 callbacks suppressed
	[Mar18 13:22] kauditd_printk_skb: 95 callbacks suppressed
	[ +23.260712] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [ac909d1fea8aaab9a88f3d5d592a39bfe90bb562e2a4ca524e32a9b1b44523b7] <==
	WARNING: 2024/03/18 13:20:09 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T13:20:09.338079Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:20:08.960912Z","time spent":"377.156073ms","remote":"127.0.0.1:51394","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	WARNING: 2024/03/18 13:20:09 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T13:20:09.338198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:20:01.720914Z","time spent":"7.615311874s","remote":"127.0.0.1:51056","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" limit:10000 "}
	WARNING: 2024/03/18 13:20:09 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T13:20:09.389644Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:20:09.389852Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T13:20:09.389944Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"821abe7be15f44a3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-18T13:20:09.390181Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390249Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390281Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390428Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390526Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390609Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390643Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"bb6cd31aaba3b75c"}
	{"level":"info","ts":"2024-03-18T13:20:09.390763Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.390813Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.390851Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.390953Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.39103Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.391146Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.391204Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:20:09.394132Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-18T13:20:09.394324Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-18T13:20:09.394361Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-942957","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"]}
	
	
	==> etcd [dcec3d819d32ce0be5c1b7b8abda9244523879a1788b2233fe934760b1126d90] <==
	{"level":"info","ts":"2024-03-18T13:23:34.799061Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:23:34.799856Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:23:34.812131Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"821abe7be15f44a3","to":"8fb0b67bf02b5ef3","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-18T13:23:34.812248Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:23:34.815614Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"821abe7be15f44a3","to":"8fb0b67bf02b5ef3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-18T13:23:34.815791Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:23:34.815993Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"warn","ts":"2024-03-18T13:24:22.649646Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.135:54080","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-03-18T13:24:22.66143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 switched to configuration voters=(9375015013596480675 13505401494079518556)"}
	{"level":"info","ts":"2024-03-18T13:24:22.661762Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"68cd46418ae274f9","local-member-id":"821abe7be15f44a3","removed-remote-peer-id":"8fb0b67bf02b5ef3","removed-remote-peer-urls":["https://192.168.39.135:2380"]}
	{"level":"info","ts":"2024-03-18T13:24:22.661868Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"warn","ts":"2024-03-18T13:24:22.662498Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:24:22.662566Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"warn","ts":"2024-03-18T13:24:22.663231Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:24:22.663292Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:24:22.663707Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"warn","ts":"2024-03-18T13:24:22.663869Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3","error":"context canceled"}
	{"level":"warn","ts":"2024-03-18T13:24:22.663946Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"8fb0b67bf02b5ef3","error":"failed to read 8fb0b67bf02b5ef3 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-18T13:24:22.664051Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"warn","ts":"2024-03-18T13:24:22.664144Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3","error":"context canceled"}
	{"level":"info","ts":"2024-03-18T13:24:22.664163Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"821abe7be15f44a3","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:24:22.664184Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"info","ts":"2024-03-18T13:24:22.6642Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"821abe7be15f44a3","removed-remote-peer-id":"8fb0b67bf02b5ef3"}
	{"level":"warn","ts":"2024-03-18T13:24:22.673981Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"821abe7be15f44a3","remote-peer-id-stream-handler":"821abe7be15f44a3","remote-peer-id-from":"8fb0b67bf02b5ef3"}
	{"level":"warn","ts":"2024-03-18T13:24:22.677526Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"821abe7be15f44a3","remote-peer-id-stream-handler":"821abe7be15f44a3","remote-peer-id-from":"8fb0b67bf02b5ef3"}
	
	
	==> kernel <==
	 13:26:57 up 17 min,  0 users,  load average: 0.26, 0.44, 0.35
	Linux ha-942957 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b] <==
	I0318 13:21:53.994914       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 13:22:11.802299       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0318 13:22:17.946223       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.55:36934->10.96.0.1:443: read: connection reset by peer
	I0318 13:22:21.018287       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0318 13:22:27.163619       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0318 13:22:33.308268       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [cb03720fabfe408fd30b89414d0d98f5f4bb3691e6b3750ded072923397f5915] <==
	I0318 13:26:14.297976       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:26:24.313338       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:26:24.313488       1 main.go:227] handling current node
	I0318 13:26:24.313578       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:26:24.313611       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:26:24.313805       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:26:24.313840       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:26:34.321581       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:26:34.321954       1 main.go:227] handling current node
	I0318 13:26:34.322066       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:26:34.322094       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:26:34.322208       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:26:34.322228       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:26:44.332192       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:26:44.332356       1 main.go:227] handling current node
	I0318 13:26:44.332394       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:26:44.332420       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:26:44.332633       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:26:44.332733       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	I0318 13:26:54.342114       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0318 13:26:54.342277       1 main.go:227] handling current node
	I0318 13:26:54.342311       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0318 13:26:54.342422       1 main.go:250] Node ha-942957-m02 has CIDR [10.244.1.0/24] 
	I0318 13:26:54.342709       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I0318 13:26:54.342845       1 main.go:250] Node ha-942957-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3d89e600a2a98b1500584b038f2146960362b64eb3f2196d13980dfdd501984f] <==
	I0318 13:22:35.248193       1 naming_controller.go:291] Starting NamingConditionController
	I0318 13:22:35.248311       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:22:35.251052       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:22:35.251141       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:22:35.251239       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:22:35.339613       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:22:35.339755       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:22:35.339831       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:22:35.339838       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:22:35.340042       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:22:35.345746       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:22:35.430477       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:22:35.432313       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:22:35.432460       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:22:35.433461       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:22:35.433591       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:22:35.434742       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:22:35.439737       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0318 13:22:35.452183       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.135]
	I0318 13:22:35.456750       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:22:35.468980       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0318 13:22:35.473926       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0318 13:22:36.250266       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0318 13:22:37.107791       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.135 192.168.39.68]
	W0318 13:24:37.128469       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.22 192.168.39.68]
	
	
	==> kube-apiserver [9c83f6d14628f38cfecaf08a1c77500ec15d1305da4bad5aa27aec23b1931a82] <==
	I0318 13:21:54.149792       1 options.go:220] external host was not specified, using 192.168.39.68
	I0318 13:21:54.151137       1 server.go:148] Version: v1.28.4
	I0318 13:21:54.151204       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:21:55.027788       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 13:21:55.037111       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 13:21:55.037261       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 13:21:55.037570       1 instance.go:298] Using reconciler: lease
	W0318 13:22:15.025642       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0318 13:22:15.026077       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0318 13:22:15.038863       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [96898006f26bb34adf8d6605356268848814dcfae484bd2e03f87a657d58c459] <==
	I0318 13:21:54.879794       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:21:55.220513       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:21:55.220712       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:21:55.222926       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:21:55.223129       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:21:55.224392       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:21:55.224510       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0318 13:22:16.045195       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.68:8443/healthz\": dial tcp 192.168.39.68:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e302d6170ad889e67d47fda9faec591edcf2b4b9ecb26c4278871f72aee01329] <==
	I0318 13:24:19.573136       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-4nd4f"
	I0318 13:24:19.638981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.110262ms"
	I0318 13:24:19.639843       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="187.966µs"
	I0318 13:24:21.454308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="69.558µs"
	I0318 13:24:21.705999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="87.636µs"
	I0318 13:24:21.724485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="127.854µs"
	I0318 13:24:21.737925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="99.493µs"
	I0318 13:24:21.809868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.01746ms"
	I0318 13:24:21.811641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="60.276µs"
	I0318 13:24:34.498444       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-942957-m04"
	E0318 13:24:34.539577       1 garbagecollector.go:392] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"ha-942957-m03", UID:"99192b15-857e-4dfc-a335-c068b00d4563", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_
:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-942957-m03", UID:"64042636-220c-4667-aa0f-64ee02cef2a3", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io "ha-942957-m03" not found
	E0318 13:24:34.547386       1 garbagecollector.go:392] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-942957-m03", UID:"fb043ca7-c0d1-4e84-8775-7c1f7ad6869a", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerW
ait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-942957-m03", UID:"64042636-220c-4667-aa0f-64ee02cef2a3", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-942957-m03" not found
	I0318 13:24:35.839610       1 event.go:307] "Event occurred" object="ha-942957-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-942957-m03 event: Removing Node ha-942957-m03 from Controller"
	E0318 13:24:50.833784       1 gc_controller.go:153] "Failed to get node" err="node \"ha-942957-m03\" not found" node="ha-942957-m03"
	E0318 13:24:50.833839       1 gc_controller.go:153] "Failed to get node" err="node \"ha-942957-m03\" not found" node="ha-942957-m03"
	E0318 13:24:50.833860       1 gc_controller.go:153] "Failed to get node" err="node \"ha-942957-m03\" not found" node="ha-942957-m03"
	E0318 13:24:50.833866       1 gc_controller.go:153] "Failed to get node" err="node \"ha-942957-m03\" not found" node="ha-942957-m03"
	E0318 13:24:50.833872       1 gc_controller.go:153] "Failed to get node" err="node \"ha-942957-m03\" not found" node="ha-942957-m03"
	I0318 13:25:10.491805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="24.683348ms"
	I0318 13:25:10.495831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="186.565µs"
	E0318 13:25:10.834467       1 gc_controller.go:153] "Failed to get node" err="node \"ha-942957-m03\" not found" node="ha-942957-m03"
	E0318 13:25:10.834927       1 gc_controller.go:153] "Failed to get node" err="node \"ha-942957-m03\" not found" node="ha-942957-m03"
	E0318 13:25:10.835030       1 gc_controller.go:153] "Failed to get node" err="node \"ha-942957-m03\" not found" node="ha-942957-m03"
	E0318 13:25:10.835060       1 gc_controller.go:153] "Failed to get node" err="node \"ha-942957-m03\" not found" node="ha-942957-m03"
	E0318 13:25:10.835085       1 gc_controller.go:153] "Failed to get node" err="node \"ha-942957-m03\" not found" node="ha-942957-m03"
	
	
	==> kube-proxy [11bc6358bf6d2aeb493acc70710632762f39f9b7aee1949337a3f69e416091a1] <==
	E0318 13:18:43.482515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:18:51.418088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:18:51.418179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:18:51.418262       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:18:51.418279       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:18:51.418325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:18:51.418349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:01.850261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:01.850523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:04.924160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:04.924257       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:04.924392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:04.924506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:17.212825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:17.212963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:20.282769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:20.283116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:19:29.498361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:19:29.498439       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:20:00.218457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:20:00.218547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-942957&resourceVersion=1728": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:20:03.291311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:20:03.291771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 13:20:06.362250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:20:06.362359       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [8f0b795d2bdf59643abf48253311965d7b9a4105fca8cbf0df38de022edfa637] <==
	I0318 13:21:55.276955       1 server_others.go:69] "Using iptables proxy"
	E0318 13:21:56.954412       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-942957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:22:00.028061       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-942957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:22:03.098907       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-942957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:22:09.243751       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-942957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 13:22:18.458873       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-942957": dial tcp 192.168.39.254:8443: connect: no route to host
	I0318 13:22:36.450635       1 node.go:141] Successfully retrieved node IP: 192.168.39.68
	I0318 13:22:36.543046       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:22:36.543099       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:22:36.549040       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:22:36.549238       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:22:36.549747       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:22:36.549784       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:22:36.556204       1 config.go:188] "Starting service config controller"
	I0318 13:22:36.556296       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:22:36.556332       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:22:36.556336       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:22:36.557112       1 config.go:315] "Starting node config controller"
	I0318 13:22:36.557146       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:22:36.657265       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:22:36.657344       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:22:36.657400       1 shared_informer.go:318] Caches are synced for endpoint slice config
	W0318 13:25:21.473234       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0318 13:25:21.473414       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0318 13:25:21.473237       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [09364d1b0b8ec5618ab74ff8cbb8d803949187b2a31e74e6c9dfb9759065cb99] <==
	W0318 13:20:01.816895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 13:20:01.816952       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 13:20:02.136991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:20:02.137098       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:20:02.142599       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:20:02.142696       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:20:02.425796       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 13:20:02.425899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 13:20:02.514504       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:20:02.514602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:20:02.655359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:20:02.655461       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:20:03.140737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:20:03.140830       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:20:03.320123       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:20:03.320257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:20:03.320301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:20:03.320322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:20:03.587024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:20:03.587132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:20:08.719011       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 13:20:08.719077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:20:09.281134       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:20:09.281492       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0318 13:20:09.282223       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5051b1a59a5bff141d1b26536b335d46317ed445c531f56a9d47fcf96874074f] <==
	W0318 13:22:31.128384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.68:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:31.128466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.68:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:31.133125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:31.133225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:31.570849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.68:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:31.570928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.68:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:32.614225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.68:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:32.614344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.68:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:32.687235       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.68:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:32.687316       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.68:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:32.908338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:32.908455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:32.973804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.68:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:32.973924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.68:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:33.006970       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	E0318 13:22:33.007007       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.68:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.68:8443: connect: connection refused
	W0318 13:22:35.318525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:22:35.318583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:22:35.318748       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 13:22:35.318762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 13:22:35.318812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:22:35.318845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:22:35.319048       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 13:22:35.319154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:22:35.853167       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 13:22:58 ha-942957 kubelet[1368]: E0318 13:22:58.997447    1368 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b67e544b-41f2-4be4-90ed-971378c82a76)\"" pod="kube-system/storage-provisioner" podUID="b67e544b-41f2-4be4-90ed-971378c82a76"
	Mar 18 13:23:02 ha-942957 kubelet[1368]: I0318 13:23:02.995367    1368 scope.go:117] "RemoveContainer" containerID="7ead34fbee6f7bd019b42e0d74c7aef7014c3cf48c79e347e2bf4cb1cb4e068b"
	Mar 18 13:23:09 ha-942957 kubelet[1368]: I0318 13:23:09.995971    1368 scope.go:117] "RemoveContainer" containerID="4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9"
	Mar 18 13:23:09 ha-942957 kubelet[1368]: E0318 13:23:09.996936    1368 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b67e544b-41f2-4be4-90ed-971378c82a76)\"" pod="kube-system/storage-provisioner" podUID="b67e544b-41f2-4be4-90ed-971378c82a76"
	Mar 18 13:23:24 ha-942957 kubelet[1368]: I0318 13:23:24.995908    1368 scope.go:117] "RemoveContainer" containerID="4e9f17e03f23a71a6e03b2c53760211406fee7a5be416b57c5251796f67391a9"
	Mar 18 13:23:35 ha-942957 kubelet[1368]: E0318 13:23:35.072400    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:23:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:23:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:23:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:23:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:24:35 ha-942957 kubelet[1368]: E0318 13:24:35.070440    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:24:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:24:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:24:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:24:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:25:35 ha-942957 kubelet[1368]: E0318 13:25:35.069435    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:25:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:25:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:25:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:25:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:26:35 ha-942957 kubelet[1368]: E0318 13:26:35.072754    1368 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:26:35 ha-942957 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:26:35 ha-942957 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:26:35 ha-942957 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:26:35 ha-942957 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:26:56.596754 1094056 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18427-1067917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
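The "bufio.Scanner: token too long" message in the stderr block above is the stock Go error for a single line that exceeds the scanner's buffer; the default cap is 64 KiB. The following is a minimal sketch of reading such a file without hitting that limit, assuming a plain bufio.Scanner over the log file named in the error (the 1 MiB cap is illustrative and not minikube's actual code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path taken from the error message above; adjust as needed.
		f, err := os.Open("/home/jenkins/minikube-integration/18427-1067917/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// bufio.Scanner fails with "token too long" once a line outgrows its
		// buffer (64 KiB by default); raising the cap avoids the error.
		sc.Buffer(make([]byte, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}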
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-942957 -n ha-942957
helpers_test.go:261: (dbg) Run:  kubectl --context ha-942957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.05s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (309.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-994669
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-994669
E0318 13:42:37.318709 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 13:44:17.919313 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-994669: exit status 82 (2m2.727219746s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-994669-m03"  ...
	* Stopping node "multinode-994669-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-994669" : exit status 82
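Here the non-zero exit status 82 accompanies the GUEST_STOP_TIMEOUT shown in the stderr block above. A minimal sketch of repeating the same invocation and reading that exit code from Go, assuming only the binary path and profile name already shown in this log (the 3-minute timeout is illustrative, not the test's actual value):

	package main

	import (
		"context"
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Same command the failing test ran; timeout chosen for illustration.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-994669")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The run captured above reported exit code 82.
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}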
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-994669 --wait=true -v=8 --alsologtostderr
E0318 13:47:20.371036 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 13:47:20.964435 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-994669 --wait=true -v=8 --alsologtostderr: (3m4.546842153s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-994669
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-994669 -n multinode-994669
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-994669 logs -n 25: (1.615367535s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m02:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile486103846/001/cp-test_multinode-994669-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m02:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669:/home/docker/cp-test_multinode-994669-m02_multinode-994669.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n multinode-994669 sudo cat                                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /home/docker/cp-test_multinode-994669-m02_multinode-994669.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m02:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03:/home/docker/cp-test_multinode-994669-m02_multinode-994669-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n multinode-994669-m03 sudo cat                                   | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /home/docker/cp-test_multinode-994669-m02_multinode-994669-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp testdata/cp-test.txt                                                | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m03:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile486103846/001/cp-test_multinode-994669-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m03:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669:/home/docker/cp-test_multinode-994669-m03_multinode-994669.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n multinode-994669 sudo cat                                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /home/docker/cp-test_multinode-994669-m03_multinode-994669.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m03:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m02:/home/docker/cp-test_multinode-994669-m03_multinode-994669-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n multinode-994669-m02 sudo cat                                   | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /home/docker/cp-test_multinode-994669-m03_multinode-994669-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-994669 node stop m03                                                          | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	| node    | multinode-994669 node start                                                             | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-994669                                                                | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	| stop    | -p multinode-994669                                                                     | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	| start   | -p multinode-994669                                                                     | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:47 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-994669                                                                | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:47 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:44:19
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:44:19.199941 1102226 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:44:19.200230 1102226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:44:19.200240 1102226 out.go:304] Setting ErrFile to fd 2...
	I0318 13:44:19.200245 1102226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:44:19.200438 1102226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:44:19.201009 1102226 out.go:298] Setting JSON to false
	I0318 13:44:19.202053 1102226 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":19606,"bootTime":1710749853,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:44:19.202127 1102226 start.go:139] virtualization: kvm guest
	I0318 13:44:19.206265 1102226 out.go:177] * [multinode-994669] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:44:19.208081 1102226 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 13:44:19.208041 1102226 notify.go:220] Checking for updates...
	I0318 13:44:19.209533 1102226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:44:19.210959 1102226 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:44:19.212418 1102226 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:44:19.213841 1102226 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:44:19.215181 1102226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:44:19.216918 1102226 config.go:182] Loaded profile config "multinode-994669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:44:19.217025 1102226 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:44:19.217529 1102226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:44:19.217580 1102226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:44:19.235210 1102226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34831
	I0318 13:44:19.235731 1102226 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:44:19.236331 1102226 main.go:141] libmachine: Using API Version  1
	I0318 13:44:19.236356 1102226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:44:19.236695 1102226 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:44:19.236917 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:44:19.272789 1102226 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:44:19.274275 1102226 start.go:297] selected driver: kvm2
	I0318 13:44:19.274307 1102226 start.go:901] validating driver "kvm2" against &{Name:multinode-994669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-994669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.187 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:44:19.274507 1102226 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:44:19.274935 1102226 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:44:19.275059 1102226 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:44:19.290693 1102226 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:44:19.291663 1102226 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:44:19.291748 1102226 cni.go:84] Creating CNI manager for ""
	I0318 13:44:19.291764 1102226 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:44:19.291864 1102226 start.go:340] cluster config:
	{Name:multinode-994669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-994669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.187 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:44:19.292082 1102226 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:44:19.293889 1102226 out.go:177] * Starting "multinode-994669" primary control-plane node in "multinode-994669" cluster
	I0318 13:44:19.295066 1102226 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:44:19.295106 1102226 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:44:19.295117 1102226 cache.go:56] Caching tarball of preloaded images
	I0318 13:44:19.295191 1102226 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:44:19.295203 1102226 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:44:19.295326 1102226 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/config.json ...
	I0318 13:44:19.295556 1102226 start.go:360] acquireMachinesLock for multinode-994669: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:44:19.295604 1102226 start.go:364] duration metric: took 27.674µs to acquireMachinesLock for "multinode-994669"
	I0318 13:44:19.295620 1102226 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:44:19.295626 1102226 fix.go:54] fixHost starting: 
	I0318 13:44:19.295908 1102226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:44:19.295941 1102226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:44:19.310477 1102226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33771
	I0318 13:44:19.310941 1102226 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:44:19.311399 1102226 main.go:141] libmachine: Using API Version  1
	I0318 13:44:19.311419 1102226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:44:19.311762 1102226 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:44:19.312007 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:44:19.312212 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetState
	I0318 13:44:19.314116 1102226 fix.go:112] recreateIfNeeded on multinode-994669: state=Running err=<nil>
	W0318 13:44:19.314149 1102226 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:44:19.316767 1102226 out.go:177] * Updating the running kvm2 "multinode-994669" VM ...
	I0318 13:44:19.317975 1102226 machine.go:94] provisionDockerMachine start ...
	I0318 13:44:19.317996 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:44:19.318220 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.320746 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.321187 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.321218 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.321314 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:44:19.321504 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.321639 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.321794 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:44:19.321949 1102226 main.go:141] libmachine: Using SSH client type: native
	I0318 13:44:19.322142 1102226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0318 13:44:19.322155 1102226 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:44:19.441727 1102226 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-994669
	
	I0318 13:44:19.441761 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetMachineName
	I0318 13:44:19.442032 1102226 buildroot.go:166] provisioning hostname "multinode-994669"
	I0318 13:44:19.442060 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetMachineName
	I0318 13:44:19.442290 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.445337 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.445739 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.445781 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.445965 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:44:19.446167 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.446358 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.446516 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:44:19.446739 1102226 main.go:141] libmachine: Using SSH client type: native
	I0318 13:44:19.446965 1102226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0318 13:44:19.446983 1102226 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-994669 && echo "multinode-994669" | sudo tee /etc/hostname
	I0318 13:44:19.578079 1102226 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-994669
	
	I0318 13:44:19.578117 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.581055 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.581434 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.581500 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.581691 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:44:19.581915 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.582094 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.582236 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:44:19.582434 1102226 main.go:141] libmachine: Using SSH client type: native
	I0318 13:44:19.582614 1102226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0318 13:44:19.582631 1102226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-994669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-994669/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-994669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:44:19.705336 1102226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:44:19.705367 1102226 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 13:44:19.705392 1102226 buildroot.go:174] setting up certificates
	I0318 13:44:19.705403 1102226 provision.go:84] configureAuth start
	I0318 13:44:19.705412 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetMachineName
	I0318 13:44:19.705697 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetIP
	I0318 13:44:19.708563 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.708988 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.709016 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.709181 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.711417 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.711777 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.711811 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.711966 1102226 provision.go:143] copyHostCerts
	I0318 13:44:19.712014 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:44:19.712056 1102226 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 13:44:19.712065 1102226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:44:19.712131 1102226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 13:44:19.712204 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:44:19.712221 1102226 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 13:44:19.712228 1102226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:44:19.712252 1102226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 13:44:19.712289 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:44:19.712310 1102226 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 13:44:19.712316 1102226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:44:19.712336 1102226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 13:44:19.712379 1102226 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.multinode-994669 san=[127.0.0.1 192.168.39.57 localhost minikube multinode-994669]
	I0318 13:44:19.769536 1102226 provision.go:177] copyRemoteCerts
	I0318 13:44:19.769608 1102226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:44:19.769635 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.772426 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.772783 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.772811 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.773038 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:44:19.773260 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.773410 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:44:19.773542 1102226 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/multinode-994669/id_rsa Username:docker}
	I0318 13:44:19.865378 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:44:19.865469 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 13:44:19.894383 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:44:19.894448 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:44:19.926069 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:44:19.926159 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:44:19.962159 1102226 provision.go:87] duration metric: took 256.743488ms to configureAuth
	I0318 13:44:19.962189 1102226 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:44:19.962455 1102226 config.go:182] Loaded profile config "multinode-994669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:44:19.962554 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.965329 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.965809 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.965854 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.965996 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:44:19.966198 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.966392 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.966545 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:44:19.966717 1102226 main.go:141] libmachine: Using SSH client type: native
	I0318 13:44:19.966891 1102226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0318 13:44:19.966905 1102226 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:45:50.678830 1102226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:45:50.678862 1102226 machine.go:97] duration metric: took 1m31.360872215s to provisionDockerMachine
	I0318 13:45:50.678878 1102226 start.go:293] postStartSetup for "multinode-994669" (driver="kvm2")
	I0318 13:45:50.678893 1102226 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:45:50.678924 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:45:50.679295 1102226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:45:50.679326 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:45:50.682700 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.683116 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:50.683158 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.683299 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:45:50.683496 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:45:50.683658 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:45:50.683876 1102226 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/multinode-994669/id_rsa Username:docker}
	I0318 13:45:50.772430 1102226 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:45:50.776936 1102226 command_runner.go:130] > NAME=Buildroot
	I0318 13:45:50.776953 1102226 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 13:45:50.776957 1102226 command_runner.go:130] > ID=buildroot
	I0318 13:45:50.776961 1102226 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 13:45:50.776966 1102226 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 13:45:50.776994 1102226 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:45:50.777007 1102226 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 13:45:50.777066 1102226 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 13:45:50.777150 1102226 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 13:45:50.777161 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /etc/ssl/certs/10752082.pem
	I0318 13:45:50.777265 1102226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:45:50.787501 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:45:50.813020 1102226 start.go:296] duration metric: took 134.123266ms for postStartSetup
	I0318 13:45:50.813076 1102226 fix.go:56] duration metric: took 1m31.517450336s for fixHost
	I0318 13:45:50.813102 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:45:50.816199 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.816549 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:50.816591 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.816701 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:45:50.816909 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:45:50.817105 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:45:50.817233 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:45:50.817386 1102226 main.go:141] libmachine: Using SSH client type: native
	I0318 13:45:50.817561 1102226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0318 13:45:50.817572 1102226 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:45:50.928786 1102226 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769550.906873363
	
	I0318 13:45:50.928815 1102226 fix.go:216] guest clock: 1710769550.906873363
	I0318 13:45:50.928823 1102226 fix.go:229] Guest: 2024-03-18 13:45:50.906873363 +0000 UTC Remote: 2024-03-18 13:45:50.813081995 +0000 UTC m=+91.663053370 (delta=93.791368ms)
	I0318 13:45:50.928861 1102226 fix.go:200] guest clock delta is within tolerance: 93.791368ms
	I0318 13:45:50.928869 1102226 start.go:83] releasing machines lock for "multinode-994669", held for 1m31.633255129s
	I0318 13:45:50.928890 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:45:50.929204 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetIP
	I0318 13:45:50.932364 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.932843 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:50.932880 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.933021 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:45:50.933609 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:45:50.933845 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:45:50.933968 1102226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:45:50.934014 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:45:50.934107 1102226 ssh_runner.go:195] Run: cat /version.json
	I0318 13:45:50.934135 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:45:50.936967 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.937312 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.937345 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:50.937366 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.937534 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:45:50.937726 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:45:50.937889 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:45:50.937926 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:50.937952 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.938052 1102226 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/multinode-994669/id_rsa Username:docker}
	I0318 13:45:50.938076 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:45:50.938191 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:45:50.938307 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:45:50.938418 1102226 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/multinode-994669/id_rsa Username:docker}
	I0318 13:45:51.053214 1102226 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 13:45:51.054036 1102226 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 13:45:51.054237 1102226 ssh_runner.go:195] Run: systemctl --version
	I0318 13:45:51.060242 1102226 command_runner.go:130] > systemd 252 (252)
	I0318 13:45:51.060277 1102226 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 13:45:51.060478 1102226 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:45:51.220374 1102226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 13:45:51.231207 1102226 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 13:45:51.231297 1102226 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:45:51.231370 1102226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:45:51.242179 1102226 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 13:45:51.242212 1102226 start.go:494] detecting cgroup driver to use...
	I0318 13:45:51.242293 1102226 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:45:51.260556 1102226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:45:51.276925 1102226 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:45:51.277000 1102226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:45:51.293505 1102226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:45:51.308566 1102226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:45:51.463621 1102226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:45:51.603848 1102226 docker.go:233] disabling docker service ...
	I0318 13:45:51.603928 1102226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:45:51.622182 1102226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:45:51.637427 1102226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:45:51.775666 1102226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:45:51.919491 1102226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:45:51.936464 1102226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:45:51.956829 1102226 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0318 13:45:51.957218 1102226 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:45:51.957285 1102226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:45:51.969747 1102226 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:45:51.969829 1102226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:45:51.981655 1102226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:45:51.993332 1102226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:45:52.005015 1102226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:45:52.017136 1102226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:45:52.027437 1102226 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 13:45:52.027561 1102226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:45:52.038895 1102226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:45:52.176127 1102226 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:45:54.775377 1102226 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.599203839s)
	I0318 13:45:54.775428 1102226 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:45:54.775494 1102226 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:45:54.780512 1102226 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0318 13:45:54.780536 1102226 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 13:45:54.780551 1102226 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I0318 13:45:54.780561 1102226 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:45:54.780570 1102226 command_runner.go:130] > Access: 2024-03-18 13:45:54.628328888 +0000
	I0318 13:45:54.780590 1102226 command_runner.go:130] > Modify: 2024-03-18 13:45:54.628328888 +0000
	I0318 13:45:54.780601 1102226 command_runner.go:130] > Change: 2024-03-18 13:45:54.628328888 +0000
	I0318 13:45:54.780607 1102226 command_runner.go:130] >  Birth: -
	I0318 13:45:54.780649 1102226 start.go:562] Will wait 60s for crictl version
	I0318 13:45:54.780713 1102226 ssh_runner.go:195] Run: which crictl
	I0318 13:45:54.784680 1102226 command_runner.go:130] > /usr/bin/crictl
	I0318 13:45:54.784764 1102226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:45:54.824404 1102226 command_runner.go:130] > Version:  0.1.0
	I0318 13:45:54.824428 1102226 command_runner.go:130] > RuntimeName:  cri-o
	I0318 13:45:54.824432 1102226 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0318 13:45:54.824437 1102226 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 13:45:54.824621 1102226 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:45:54.824693 1102226 ssh_runner.go:195] Run: crio --version
	I0318 13:45:54.854273 1102226 command_runner.go:130] > crio version 1.29.1
	I0318 13:45:54.854304 1102226 command_runner.go:130] > Version:        1.29.1
	I0318 13:45:54.854313 1102226 command_runner.go:130] > GitCommit:      unknown
	I0318 13:45:54.854318 1102226 command_runner.go:130] > GitCommitDate:  unknown
	I0318 13:45:54.854324 1102226 command_runner.go:130] > GitTreeState:   clean
	I0318 13:45:54.854329 1102226 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0318 13:45:54.854334 1102226 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 13:45:54.854338 1102226 command_runner.go:130] > Compiler:       gc
	I0318 13:45:54.854342 1102226 command_runner.go:130] > Platform:       linux/amd64
	I0318 13:45:54.854346 1102226 command_runner.go:130] > Linkmode:       dynamic
	I0318 13:45:54.854351 1102226 command_runner.go:130] > BuildTags:      
	I0318 13:45:54.854355 1102226 command_runner.go:130] >   containers_image_ostree_stub
	I0318 13:45:54.854360 1102226 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 13:45:54.854364 1102226 command_runner.go:130] >   btrfs_noversion
	I0318 13:45:54.854368 1102226 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 13:45:54.854376 1102226 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 13:45:54.854379 1102226 command_runner.go:130] >   seccomp
	I0318 13:45:54.854383 1102226 command_runner.go:130] > LDFlags:          unknown
	I0318 13:45:54.854387 1102226 command_runner.go:130] > SeccompEnabled:   true
	I0318 13:45:54.854391 1102226 command_runner.go:130] > AppArmorEnabled:  false
	I0318 13:45:54.854460 1102226 ssh_runner.go:195] Run: crio --version
	I0318 13:45:54.883884 1102226 command_runner.go:130] > crio version 1.29.1
	I0318 13:45:54.883923 1102226 command_runner.go:130] > Version:        1.29.1
	I0318 13:45:54.883932 1102226 command_runner.go:130] > GitCommit:      unknown
	I0318 13:45:54.883939 1102226 command_runner.go:130] > GitCommitDate:  unknown
	I0318 13:45:54.883945 1102226 command_runner.go:130] > GitTreeState:   clean
	I0318 13:45:54.883954 1102226 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0318 13:45:54.883960 1102226 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 13:45:54.883965 1102226 command_runner.go:130] > Compiler:       gc
	I0318 13:45:54.883972 1102226 command_runner.go:130] > Platform:       linux/amd64
	I0318 13:45:54.883979 1102226 command_runner.go:130] > Linkmode:       dynamic
	I0318 13:45:54.883994 1102226 command_runner.go:130] > BuildTags:      
	I0318 13:45:54.884005 1102226 command_runner.go:130] >   containers_image_ostree_stub
	I0318 13:45:54.884015 1102226 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 13:45:54.884024 1102226 command_runner.go:130] >   btrfs_noversion
	I0318 13:45:54.884035 1102226 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 13:45:54.884044 1102226 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 13:45:54.884054 1102226 command_runner.go:130] >   seccomp
	I0318 13:45:54.884063 1102226 command_runner.go:130] > LDFlags:          unknown
	I0318 13:45:54.884070 1102226 command_runner.go:130] > SeccompEnabled:   true
	I0318 13:45:54.884079 1102226 command_runner.go:130] > AppArmorEnabled:  false
	I0318 13:45:54.888270 1102226 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:45:54.889848 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetIP
	I0318 13:45:54.892709 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:54.893067 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:54.893096 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:54.893307 1102226 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:45:54.897749 1102226 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0318 13:45:54.897836 1102226 kubeadm.go:877] updating cluster {Name:multinode-994669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-994669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.187 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:45:54.897969 1102226 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:45:54.898017 1102226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:45:54.951895 1102226 command_runner.go:130] > {
	I0318 13:45:54.951921 1102226 command_runner.go:130] >   "images": [
	I0318 13:45:54.951927 1102226 command_runner.go:130] >     {
	I0318 13:45:54.951940 1102226 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 13:45:54.951949 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.951958 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 13:45:54.951968 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.951974 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.951982 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 13:45:54.951990 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 13:45:54.951997 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952001 1102226 command_runner.go:130] >       "size": "65258016",
	I0318 13:45:54.952005 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.952012 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952020 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952028 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952035 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952049 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952059 1102226 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 13:45:54.952066 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952074 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 13:45:54.952080 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952084 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.952091 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 13:45:54.952106 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 13:45:54.952115 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952122 1102226 command_runner.go:130] >       "size": "65291810",
	I0318 13:45:54.952132 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.952157 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952168 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952173 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952176 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952181 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952194 1102226 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 13:45:54.952204 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952216 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 13:45:54.952231 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952241 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.952255 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 13:45:54.952266 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 13:45:54.952274 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952285 1102226 command_runner.go:130] >       "size": "1363676",
	I0318 13:45:54.952295 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.952304 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952313 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952322 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952331 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952339 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952347 1102226 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 13:45:54.952352 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952359 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 13:45:54.952369 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952379 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.952395 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 13:45:54.952418 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 13:45:54.952427 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952435 1102226 command_runner.go:130] >       "size": "31470524",
	I0318 13:45:54.952439 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.952449 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952459 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952468 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952477 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952486 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952498 1102226 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 13:45:54.952508 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952517 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 13:45:54.952523 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952528 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.952544 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 13:45:54.952559 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 13:45:54.952568 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952576 1102226 command_runner.go:130] >       "size": "53621675",
	I0318 13:45:54.952599 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.952607 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952611 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952621 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952630 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952638 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952650 1102226 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 13:45:54.952659 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952769 1102226 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 13:45:54.952791 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952803 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.952818 1102226 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 13:45:54.952835 1102226 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 13:45:54.952845 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952857 1102226 command_runner.go:130] >       "size": "295456551",
	I0318 13:45:54.952868 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.952879 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.952890 1102226 command_runner.go:130] >       },
	I0318 13:45:54.952901 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952911 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952922 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952933 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952950 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952965 1102226 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 13:45:54.952977 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952990 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 13:45:54.953001 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953012 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.953028 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 13:45:54.953046 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 13:45:54.953057 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953071 1102226 command_runner.go:130] >       "size": "127226832",
	I0318 13:45:54.953083 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.953094 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.953105 1102226 command_runner.go:130] >       },
	I0318 13:45:54.953116 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.953143 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.953155 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.953166 1102226 command_runner.go:130] >     },
	I0318 13:45:54.953176 1102226 command_runner.go:130] >     {
	I0318 13:45:54.953188 1102226 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 13:45:54.953199 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.953213 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 13:45:54.953224 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953232 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.953269 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 13:45:54.953284 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 13:45:54.953300 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953315 1102226 command_runner.go:130] >       "size": "123261750",
	I0318 13:45:54.953324 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.953332 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.953342 1102226 command_runner.go:130] >       },
	I0318 13:45:54.953351 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.953361 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.953367 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.953374 1102226 command_runner.go:130] >     },
	I0318 13:45:54.953385 1102226 command_runner.go:130] >     {
	I0318 13:45:54.953397 1102226 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 13:45:54.953408 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.953417 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 13:45:54.953424 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953431 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.953444 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 13:45:54.953453 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 13:45:54.953458 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953465 1102226 command_runner.go:130] >       "size": "74749335",
	I0318 13:45:54.953472 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.953479 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.953486 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.953493 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.953499 1102226 command_runner.go:130] >     },
	I0318 13:45:54.953505 1102226 command_runner.go:130] >     {
	I0318 13:45:54.953523 1102226 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 13:45:54.953531 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.953538 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 13:45:54.953542 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953558 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.953571 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 13:45:54.953584 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 13:45:54.953591 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953599 1102226 command_runner.go:130] >       "size": "61551410",
	I0318 13:45:54.953606 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.953618 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.953624 1102226 command_runner.go:130] >       },
	I0318 13:45:54.953634 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.953642 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.953653 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.953660 1102226 command_runner.go:130] >     },
	I0318 13:45:54.953671 1102226 command_runner.go:130] >     {
	I0318 13:45:54.953703 1102226 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 13:45:54.953728 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.953741 1102226 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 13:45:54.953751 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953758 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.953775 1102226 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 13:45:54.953790 1102226 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 13:45:54.953801 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953809 1102226 command_runner.go:130] >       "size": "750414",
	I0318 13:45:54.953819 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.953824 1102226 command_runner.go:130] >         "value": "65535"
	I0318 13:45:54.953830 1102226 command_runner.go:130] >       },
	I0318 13:45:54.953837 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.953849 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.953858 1102226 command_runner.go:130] >       "pinned": true
	I0318 13:45:54.953874 1102226 command_runner.go:130] >     }
	I0318 13:45:54.953880 1102226 command_runner.go:130] >   ]
	I0318 13:45:54.953887 1102226 command_runner.go:130] > }
	I0318 13:45:54.954188 1102226 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:45:54.954207 1102226 crio.go:415] Images already preloaded, skipping extraction
	I0318 13:45:54.954276 1102226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:45:54.994088 1102226 command_runner.go:130] > {
	I0318 13:45:54.994112 1102226 command_runner.go:130] >   "images": [
	I0318 13:45:54.994115 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994123 1102226 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 13:45:54.994127 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994133 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 13:45:54.994137 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994141 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994157 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 13:45:54.994170 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 13:45:54.994174 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994179 1102226 command_runner.go:130] >       "size": "65258016",
	I0318 13:45:54.994183 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994187 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994195 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994200 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994203 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994206 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994213 1102226 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 13:45:54.994219 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994225 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 13:45:54.994229 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994233 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994240 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 13:45:54.994248 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 13:45:54.994252 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994256 1102226 command_runner.go:130] >       "size": "65291810",
	I0318 13:45:54.994263 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994270 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994275 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994281 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994285 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994288 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994293 1102226 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 13:45:54.994298 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994303 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 13:45:54.994307 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994314 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994320 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 13:45:54.994327 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 13:45:54.994331 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994335 1102226 command_runner.go:130] >       "size": "1363676",
	I0318 13:45:54.994338 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994342 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994346 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994357 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994362 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994365 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994371 1102226 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 13:45:54.994376 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994381 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 13:45:54.994384 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994388 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994396 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 13:45:54.994410 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 13:45:54.994421 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994425 1102226 command_runner.go:130] >       "size": "31470524",
	I0318 13:45:54.994428 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994431 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994435 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994439 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994442 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994446 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994452 1102226 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 13:45:54.994456 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994461 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 13:45:54.994464 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994471 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994478 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 13:45:54.994485 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 13:45:54.994490 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994495 1102226 command_runner.go:130] >       "size": "53621675",
	I0318 13:45:54.994501 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994504 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994508 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994512 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994516 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994519 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994525 1102226 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 13:45:54.994537 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994542 1102226 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 13:45:54.994552 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994559 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994566 1102226 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 13:45:54.994575 1102226 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 13:45:54.994579 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994582 1102226 command_runner.go:130] >       "size": "295456551",
	I0318 13:45:54.994585 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.994589 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.994595 1102226 command_runner.go:130] >       },
	I0318 13:45:54.994599 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994605 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994609 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994613 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994618 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994626 1102226 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 13:45:54.994632 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994637 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 13:45:54.994643 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994648 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994659 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 13:45:54.994670 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 13:45:54.994676 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994683 1102226 command_runner.go:130] >       "size": "127226832",
	I0318 13:45:54.994690 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.994693 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.994697 1102226 command_runner.go:130] >       },
	I0318 13:45:54.994701 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994708 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994712 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994715 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994718 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994724 1102226 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 13:45:54.994730 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994736 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 13:45:54.994739 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994743 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994774 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 13:45:54.994785 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 13:45:54.994788 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994792 1102226 command_runner.go:130] >       "size": "123261750",
	I0318 13:45:54.994796 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.994799 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.994803 1102226 command_runner.go:130] >       },
	I0318 13:45:54.994807 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994810 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994814 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994818 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994821 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994828 1102226 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 13:45:54.994832 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994837 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 13:45:54.994843 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994847 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994856 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 13:45:54.994863 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 13:45:54.994870 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994874 1102226 command_runner.go:130] >       "size": "74749335",
	I0318 13:45:54.994878 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994884 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994888 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994892 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994897 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994900 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994908 1102226 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 13:45:54.994912 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994919 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 13:45:54.994922 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994926 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994936 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 13:45:54.994946 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 13:45:54.994949 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994953 1102226 command_runner.go:130] >       "size": "61551410",
	I0318 13:45:54.994965 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.994972 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.994975 1102226 command_runner.go:130] >       },
	I0318 13:45:54.994979 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994983 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994986 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994990 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994993 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994999 1102226 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 13:45:54.995004 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.995008 1102226 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 13:45:54.995013 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.995017 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.995026 1102226 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 13:45:54.995035 1102226 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 13:45:54.995039 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.995043 1102226 command_runner.go:130] >       "size": "750414",
	I0318 13:45:54.995049 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.995053 1102226 command_runner.go:130] >         "value": "65535"
	I0318 13:45:54.995056 1102226 command_runner.go:130] >       },
	I0318 13:45:54.995062 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.995066 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.995070 1102226 command_runner.go:130] >       "pinned": true
	I0318 13:45:54.995075 1102226 command_runner.go:130] >     }
	I0318 13:45:54.995078 1102226 command_runner.go:130] >   ]
	I0318 13:45:54.995082 1102226 command_runner.go:130] > }
	I0318 13:45:54.995675 1102226 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:45:54.995692 1102226 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:45:54.995709 1102226 kubeadm.go:928] updating node { 192.168.39.57 8443 v1.28.4 crio true true} ...
	I0318 13:45:54.995839 1102226 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-994669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-994669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
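
The kubelet unit drop-in logged above is a small template filled with per-node values (Kubernetes version, hostname override, node IP). A rough sketch of producing the same text, using an illustrative nodeConfig struct rather than minikube's real config types:

    // kubelet_dropin.go - sketch of rendering the kubelet systemd drop-in shown above.
    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    // nodeConfig is an illustrative stand-in for the node-specific settings.
    type nodeConfig struct {
        KubernetesVersion string // e.g. "v1.28.4"
        Hostname          string // e.g. "multinode-994669"
        NodeIP            string // e.g. "192.168.39.57"
    }

    const dropinTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        cfg := nodeConfig{
            KubernetesVersion: "v1.28.4",
            Hostname:          "multinode-994669",
            NodeIP:            "192.168.39.57",
        }
        t := template.Must(template.New("dropin").Parse(dropinTmpl))
        // Printed to stdout here; the real flow ships the rendered text to the node over SSH.
        if err := t.Execute(os.Stdout, cfg); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
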
	I0318 13:45:54.995911 1102226 ssh_runner.go:195] Run: crio config
	I0318 13:45:55.047760 1102226 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0318 13:45:55.047793 1102226 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0318 13:45:55.047802 1102226 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0318 13:45:55.047807 1102226 command_runner.go:130] > #
	I0318 13:45:55.047817 1102226 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0318 13:45:55.047835 1102226 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0318 13:45:55.047860 1102226 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0318 13:45:55.047871 1102226 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0318 13:45:55.047877 1102226 command_runner.go:130] > # reload'.
	I0318 13:45:55.047883 1102226 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0318 13:45:55.047899 1102226 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0318 13:45:55.047910 1102226 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0318 13:45:55.047916 1102226 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0318 13:45:55.047920 1102226 command_runner.go:130] > [crio]
	I0318 13:45:55.047925 1102226 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0318 13:45:55.047931 1102226 command_runner.go:130] > # containers images, in this directory.
	I0318 13:45:55.047940 1102226 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0318 13:45:55.047957 1102226 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0318 13:45:55.048131 1102226 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0318 13:45:55.048154 1102226 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0318 13:45:55.048277 1102226 command_runner.go:130] > # imagestore = ""
	I0318 13:45:55.048302 1102226 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0318 13:45:55.048311 1102226 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0318 13:45:55.048396 1102226 command_runner.go:130] > storage_driver = "overlay"
	I0318 13:45:55.048411 1102226 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0318 13:45:55.048421 1102226 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0318 13:45:55.048427 1102226 command_runner.go:130] > storage_option = [
	I0318 13:45:55.048583 1102226 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0318 13:45:55.048612 1102226 command_runner.go:130] > ]
	I0318 13:45:55.048631 1102226 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0318 13:45:55.048644 1102226 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0318 13:45:55.048870 1102226 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0318 13:45:55.048885 1102226 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0318 13:45:55.048895 1102226 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0318 13:45:55.048905 1102226 command_runner.go:130] > # always happen on a node reboot
	I0318 13:45:55.049258 1102226 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0318 13:45:55.049281 1102226 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0318 13:45:55.049295 1102226 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0318 13:45:55.049303 1102226 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0318 13:45:55.049414 1102226 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0318 13:45:55.049431 1102226 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0318 13:45:55.049444 1102226 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0318 13:45:55.049648 1102226 command_runner.go:130] > # internal_wipe = true
	I0318 13:45:55.049664 1102226 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0318 13:45:55.049670 1102226 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0318 13:45:55.049904 1102226 command_runner.go:130] > # internal_repair = false
	I0318 13:45:55.049913 1102226 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0318 13:45:55.049919 1102226 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0318 13:45:55.049924 1102226 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0318 13:45:55.050324 1102226 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0318 13:45:55.050333 1102226 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0318 13:45:55.050337 1102226 command_runner.go:130] > [crio.api]
	I0318 13:45:55.050343 1102226 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0318 13:45:55.050602 1102226 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0318 13:45:55.050617 1102226 command_runner.go:130] > # IP address on which the stream server will listen.
	I0318 13:45:55.050854 1102226 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0318 13:45:55.050865 1102226 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0318 13:45:55.050871 1102226 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0318 13:45:55.051157 1102226 command_runner.go:130] > # stream_port = "0"
	I0318 13:45:55.051170 1102226 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0318 13:45:55.051427 1102226 command_runner.go:130] > # stream_enable_tls = false
	I0318 13:45:55.051438 1102226 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0318 13:45:55.051661 1102226 command_runner.go:130] > # stream_idle_timeout = ""
	I0318 13:45:55.051683 1102226 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0318 13:45:55.051693 1102226 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0318 13:45:55.051702 1102226 command_runner.go:130] > # minutes.
	I0318 13:45:55.051887 1102226 command_runner.go:130] > # stream_tls_cert = ""
	I0318 13:45:55.051904 1102226 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0318 13:45:55.051910 1102226 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0318 13:45:55.052194 1102226 command_runner.go:130] > # stream_tls_key = ""
	I0318 13:45:55.052211 1102226 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0318 13:45:55.052222 1102226 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0318 13:45:55.052254 1102226 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0318 13:45:55.052428 1102226 command_runner.go:130] > # stream_tls_ca = ""
	I0318 13:45:55.052451 1102226 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 13:45:55.052547 1102226 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0318 13:45:55.052564 1102226 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 13:45:55.052726 1102226 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0318 13:45:55.052736 1102226 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0318 13:45:55.052742 1102226 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0318 13:45:55.052745 1102226 command_runner.go:130] > [crio.runtime]
	I0318 13:45:55.052753 1102226 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0318 13:45:55.052763 1102226 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0318 13:45:55.052774 1102226 command_runner.go:130] > # "nofile=1024:2048"
	I0318 13:45:55.052784 1102226 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0318 13:45:55.052860 1102226 command_runner.go:130] > # default_ulimits = [
	I0318 13:45:55.053057 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.053067 1102226 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0318 13:45:55.053372 1102226 command_runner.go:130] > # no_pivot = false
	I0318 13:45:55.053386 1102226 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0318 13:45:55.053396 1102226 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0318 13:45:55.055125 1102226 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0318 13:45:55.055137 1102226 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0318 13:45:55.055142 1102226 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0318 13:45:55.055149 1102226 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 13:45:55.055157 1102226 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0318 13:45:55.055169 1102226 command_runner.go:130] > # Cgroup setting for conmon
	I0318 13:45:55.055181 1102226 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0318 13:45:55.055189 1102226 command_runner.go:130] > conmon_cgroup = "pod"
	I0318 13:45:55.055196 1102226 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0318 13:45:55.055203 1102226 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0318 13:45:55.055209 1102226 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 13:45:55.055215 1102226 command_runner.go:130] > conmon_env = [
	I0318 13:45:55.055221 1102226 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 13:45:55.055227 1102226 command_runner.go:130] > ]
	I0318 13:45:55.055232 1102226 command_runner.go:130] > # Additional environment variables to set for all the
	I0318 13:45:55.055240 1102226 command_runner.go:130] > # containers. These are overridden if set in the
	I0318 13:45:55.055253 1102226 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0318 13:45:55.055263 1102226 command_runner.go:130] > # default_env = [
	I0318 13:45:55.055269 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.055282 1102226 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0318 13:45:55.055295 1102226 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0318 13:45:55.055301 1102226 command_runner.go:130] > # selinux = false
	I0318 13:45:55.055307 1102226 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0318 13:45:55.055322 1102226 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0318 13:45:55.055331 1102226 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0318 13:45:55.055338 1102226 command_runner.go:130] > # seccomp_profile = ""
	I0318 13:45:55.055346 1102226 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0318 13:45:55.055358 1102226 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0318 13:45:55.055372 1102226 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0318 13:45:55.055383 1102226 command_runner.go:130] > # which might increase security.
	I0318 13:45:55.055394 1102226 command_runner.go:130] > # This option is currently deprecated,
	I0318 13:45:55.055406 1102226 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0318 13:45:55.055413 1102226 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0318 13:45:55.055419 1102226 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0318 13:45:55.055427 1102226 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0318 13:45:55.055435 1102226 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0318 13:45:55.055445 1102226 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0318 13:45:55.055455 1102226 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:45:55.055467 1102226 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0318 13:45:55.055478 1102226 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0318 13:45:55.055488 1102226 command_runner.go:130] > # the cgroup blockio controller.
	I0318 13:45:55.055498 1102226 command_runner.go:130] > # blockio_config_file = ""
	I0318 13:45:55.055511 1102226 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0318 13:45:55.055520 1102226 command_runner.go:130] > # blockio parameters.
	I0318 13:45:55.055528 1102226 command_runner.go:130] > # blockio_reload = false
	I0318 13:45:55.055534 1102226 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0318 13:45:55.055540 1102226 command_runner.go:130] > # irqbalance daemon.
	I0318 13:45:55.055548 1102226 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0318 13:45:55.055562 1102226 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0318 13:45:55.055576 1102226 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0318 13:45:55.055590 1102226 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0318 13:45:55.055602 1102226 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0318 13:45:55.055615 1102226 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0318 13:45:55.055624 1102226 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:45:55.055633 1102226 command_runner.go:130] > # rdt_config_file = ""
	I0318 13:45:55.055646 1102226 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0318 13:45:55.055654 1102226 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0318 13:45:55.055692 1102226 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0318 13:45:55.055703 1102226 command_runner.go:130] > # separate_pull_cgroup = ""
	I0318 13:45:55.055719 1102226 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0318 13:45:55.055729 1102226 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0318 13:45:55.055738 1102226 command_runner.go:130] > # will be added.
	I0318 13:45:55.055748 1102226 command_runner.go:130] > # default_capabilities = [
	I0318 13:45:55.055757 1102226 command_runner.go:130] > # 	"CHOWN",
	I0318 13:45:55.055764 1102226 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0318 13:45:55.055773 1102226 command_runner.go:130] > # 	"FSETID",
	I0318 13:45:55.055783 1102226 command_runner.go:130] > # 	"FOWNER",
	I0318 13:45:55.055792 1102226 command_runner.go:130] > # 	"SETGID",
	I0318 13:45:55.055801 1102226 command_runner.go:130] > # 	"SETUID",
	I0318 13:45:55.055810 1102226 command_runner.go:130] > # 	"SETPCAP",
	I0318 13:45:55.055819 1102226 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0318 13:45:55.055836 1102226 command_runner.go:130] > # 	"KILL",
	I0318 13:45:55.055842 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.055855 1102226 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0318 13:45:55.055870 1102226 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0318 13:45:55.055880 1102226 command_runner.go:130] > # add_inheritable_capabilities = false
	I0318 13:45:55.055893 1102226 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0318 13:45:55.055905 1102226 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 13:45:55.055914 1102226 command_runner.go:130] > # default_sysctls = [
	I0318 13:45:55.055921 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.055926 1102226 command_runner.go:130] > # List of devices on the host that a
	I0318 13:45:55.055939 1102226 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0318 13:45:55.055950 1102226 command_runner.go:130] > # allowed_devices = [
	I0318 13:45:55.055957 1102226 command_runner.go:130] > # 	"/dev/fuse",
	I0318 13:45:55.055965 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.055976 1102226 command_runner.go:130] > # List of additional devices. specified as
	I0318 13:45:55.055991 1102226 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0318 13:45:55.056001 1102226 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0318 13:45:55.056012 1102226 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 13:45:55.056020 1102226 command_runner.go:130] > # additional_devices = [
	I0318 13:45:55.056024 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.056032 1102226 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0318 13:45:55.056049 1102226 command_runner.go:130] > # cdi_spec_dirs = [
	I0318 13:45:55.056059 1102226 command_runner.go:130] > # 	"/etc/cdi",
	I0318 13:45:55.056069 1102226 command_runner.go:130] > # 	"/var/run/cdi",
	I0318 13:45:55.056085 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.056098 1102226 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0318 13:45:55.056108 1102226 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0318 13:45:55.056116 1102226 command_runner.go:130] > # Defaults to false.
	I0318 13:45:55.056127 1102226 command_runner.go:130] > # device_ownership_from_security_context = false
	I0318 13:45:55.056141 1102226 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0318 13:45:55.056158 1102226 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0318 13:45:55.056168 1102226 command_runner.go:130] > # hooks_dir = [
	I0318 13:45:55.056178 1102226 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0318 13:45:55.056186 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.056197 1102226 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0318 13:45:55.056218 1102226 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0318 13:45:55.056231 1102226 command_runner.go:130] > # its default mounts from the following two files:
	I0318 13:45:55.056240 1102226 command_runner.go:130] > #
	I0318 13:45:55.056253 1102226 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0318 13:45:55.056266 1102226 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0318 13:45:55.056279 1102226 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0318 13:45:55.056287 1102226 command_runner.go:130] > #
	I0318 13:45:55.056297 1102226 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0318 13:45:55.056310 1102226 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0318 13:45:55.056322 1102226 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0318 13:45:55.056330 1102226 command_runner.go:130] > #      only add mounts it finds in this file.
	I0318 13:45:55.056338 1102226 command_runner.go:130] > #
	I0318 13:45:55.056346 1102226 command_runner.go:130] > # default_mounts_file = ""
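	For illustration only, a default-mounts file in the /SRC:/DST form described above could contain a single line such as the following; the paths are hypothetical and not taken from this run:
	/usr/share/rhel/secrets:/run/secrets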
	I0318 13:45:55.056357 1102226 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0318 13:45:55.056371 1102226 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0318 13:45:55.056380 1102226 command_runner.go:130] > pids_limit = 1024
	I0318 13:45:55.056393 1102226 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0318 13:45:55.056407 1102226 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0318 13:45:55.056426 1102226 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0318 13:45:55.056442 1102226 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0318 13:45:55.056451 1102226 command_runner.go:130] > # log_size_max = -1
	I0318 13:45:55.056463 1102226 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0318 13:45:55.056473 1102226 command_runner.go:130] > # log_to_journald = false
	I0318 13:45:55.056486 1102226 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0318 13:45:55.056499 1102226 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0318 13:45:55.056516 1102226 command_runner.go:130] > # Path to directory for container attach sockets.
	I0318 13:45:55.056528 1102226 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0318 13:45:55.056539 1102226 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0318 13:45:55.056549 1102226 command_runner.go:130] > # bind_mount_prefix = ""
	I0318 13:45:55.056561 1102226 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0318 13:45:55.056571 1102226 command_runner.go:130] > # read_only = false
	I0318 13:45:55.056585 1102226 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0318 13:45:55.056598 1102226 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0318 13:45:55.056607 1102226 command_runner.go:130] > # live configuration reload.
	I0318 13:45:55.056617 1102226 command_runner.go:130] > # log_level = "info"
	I0318 13:45:55.056628 1102226 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0318 13:45:55.056636 1102226 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:45:55.056645 1102226 command_runner.go:130] > # log_filter = ""
	I0318 13:45:55.056655 1102226 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0318 13:45:55.056668 1102226 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0318 13:45:55.056678 1102226 command_runner.go:130] > # separated by comma.
	I0318 13:45:55.056694 1102226 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:45:55.056703 1102226 command_runner.go:130] > # uid_mappings = ""
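	As a hypothetical example of the containerUID:HostUID:Size form mentioned above, mapping container UID 0 onto host UID 100000 for a range of 65536 IDs would be written as follows; the values are illustrative only:
	uid_mappings = "0:100000:65536"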
	I0318 13:45:55.056714 1102226 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0318 13:45:55.056724 1102226 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0318 13:45:55.056737 1102226 command_runner.go:130] > # separated by comma.
	I0318 13:45:55.056753 1102226 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:45:55.056763 1102226 command_runner.go:130] > # gid_mappings = ""
	I0318 13:45:55.056776 1102226 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0318 13:45:55.056788 1102226 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 13:45:55.056800 1102226 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 13:45:55.056822 1102226 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:45:55.056831 1102226 command_runner.go:130] > # minimum_mappable_uid = -1
	I0318 13:45:55.056845 1102226 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0318 13:45:55.056858 1102226 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 13:45:55.056870 1102226 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 13:45:55.056885 1102226 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:45:55.056895 1102226 command_runner.go:130] > # minimum_mappable_gid = -1
	I0318 13:45:55.056906 1102226 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0318 13:45:55.056917 1102226 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0318 13:45:55.056930 1102226 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0318 13:45:55.056947 1102226 command_runner.go:130] > # ctr_stop_timeout = 30
	I0318 13:45:55.056961 1102226 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0318 13:45:55.056974 1102226 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0318 13:45:55.056984 1102226 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0318 13:45:55.056995 1102226 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0318 13:45:55.057004 1102226 command_runner.go:130] > drop_infra_ctr = false
	I0318 13:45:55.057014 1102226 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0318 13:45:55.057024 1102226 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0318 13:45:55.057039 1102226 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0318 13:45:55.057054 1102226 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0318 13:45:55.057065 1102226 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0318 13:45:55.057077 1102226 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0318 13:45:55.057090 1102226 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0318 13:45:55.057101 1102226 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0318 13:45:55.057111 1102226 command_runner.go:130] > # shared_cpuset = ""
	I0318 13:45:55.057122 1102226 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0318 13:45:55.057129 1102226 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0318 13:45:55.057136 1102226 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0318 13:45:55.057151 1102226 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0318 13:45:55.057161 1102226 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0318 13:45:55.057173 1102226 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0318 13:45:55.057185 1102226 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0318 13:45:55.057195 1102226 command_runner.go:130] > # enable_criu_support = false
	I0318 13:45:55.057206 1102226 command_runner.go:130] > # Enable/disable the generation of the container,
	I0318 13:45:55.057214 1102226 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0318 13:45:55.057223 1102226 command_runner.go:130] > # enable_pod_events = false
	I0318 13:45:55.057237 1102226 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0318 13:45:55.057262 1102226 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0318 13:45:55.057272 1102226 command_runner.go:130] > # default_runtime = "runc"
	I0318 13:45:55.057283 1102226 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0318 13:45:55.057296 1102226 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0318 13:45:55.057310 1102226 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0318 13:45:55.057322 1102226 command_runner.go:130] > # creation as a file is not desired either.
	I0318 13:45:55.057337 1102226 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0318 13:45:55.057348 1102226 command_runner.go:130] > # the hostname is being managed dynamically.
	I0318 13:45:55.057366 1102226 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0318 13:45:55.057375 1102226 command_runner.go:130] > # ]
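	Filling in the /etc/hostname case given above, a populated version of that list would look like the following sketch:
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]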
	I0318 13:45:55.057385 1102226 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0318 13:45:55.057397 1102226 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0318 13:45:55.057409 1102226 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0318 13:45:55.057421 1102226 command_runner.go:130] > # Each entry in the table should follow the format:
	I0318 13:45:55.057430 1102226 command_runner.go:130] > #
	I0318 13:45:55.057440 1102226 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0318 13:45:55.057450 1102226 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0318 13:45:55.057460 1102226 command_runner.go:130] > # runtime_type = "oci"
	I0318 13:45:55.057545 1102226 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0318 13:45:55.057560 1102226 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0318 13:45:55.057564 1102226 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0318 13:45:55.057568 1102226 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0318 13:45:55.057575 1102226 command_runner.go:130] > # monitor_env = []
	I0318 13:45:55.057586 1102226 command_runner.go:130] > # privileged_without_host_devices = false
	I0318 13:45:55.057596 1102226 command_runner.go:130] > # allowed_annotations = []
	I0318 13:45:55.057608 1102226 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0318 13:45:55.057617 1102226 command_runner.go:130] > # Where:
	I0318 13:45:55.057629 1102226 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0318 13:45:55.057641 1102226 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0318 13:45:55.057651 1102226 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0318 13:45:55.057660 1102226 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0318 13:45:55.057665 1102226 command_runner.go:130] > #   in $PATH.
	I0318 13:45:55.057679 1102226 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0318 13:45:55.057690 1102226 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0318 13:45:55.057702 1102226 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0318 13:45:55.057711 1102226 command_runner.go:130] > #   state.
	I0318 13:45:55.057724 1102226 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0318 13:45:55.057743 1102226 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0318 13:45:55.057753 1102226 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0318 13:45:55.057765 1102226 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0318 13:45:55.057779 1102226 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0318 13:45:55.057793 1102226 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0318 13:45:55.057804 1102226 command_runner.go:130] > #   The currently recognized values are:
	I0318 13:45:55.057818 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0318 13:45:55.057838 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0318 13:45:55.057848 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0318 13:45:55.057861 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0318 13:45:55.057876 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0318 13:45:55.057890 1102226 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0318 13:45:55.057904 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0318 13:45:55.057917 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0318 13:45:55.057929 1102226 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0318 13:45:55.057938 1102226 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0318 13:45:55.057947 1102226 command_runner.go:130] > #   deprecated option "conmon".
	I0318 13:45:55.057963 1102226 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0318 13:45:55.057974 1102226 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0318 13:45:55.057988 1102226 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0318 13:45:55.057999 1102226 command_runner.go:130] > #   should be moved to the container's cgroup
	I0318 13:45:55.058013 1102226 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0318 13:45:55.058021 1102226 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0318 13:45:55.058032 1102226 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0318 13:45:55.058049 1102226 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0318 13:45:55.058058 1102226 command_runner.go:130] > #
	I0318 13:45:55.058066 1102226 command_runner.go:130] > # Using the seccomp notifier feature:
	I0318 13:45:55.058075 1102226 command_runner.go:130] > #
	I0318 13:45:55.058091 1102226 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0318 13:45:55.058104 1102226 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0318 13:45:55.058112 1102226 command_runner.go:130] > #
	I0318 13:45:55.058125 1102226 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0318 13:45:55.058134 1102226 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0318 13:45:55.058142 1102226 command_runner.go:130] > #
	I0318 13:45:55.058156 1102226 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0318 13:45:55.058165 1102226 command_runner.go:130] > # feature.
	I0318 13:45:55.058173 1102226 command_runner.go:130] > #
	I0318 13:45:55.058182 1102226 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0318 13:45:55.058195 1102226 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0318 13:45:55.058208 1102226 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0318 13:45:55.058217 1102226 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0318 13:45:55.058229 1102226 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0318 13:45:55.058238 1102226 command_runner.go:130] > #
	I0318 13:45:55.058258 1102226 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0318 13:45:55.058271 1102226 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0318 13:45:55.058279 1102226 command_runner.go:130] > #
	I0318 13:45:55.058289 1102226 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0318 13:45:55.058300 1102226 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0318 13:45:55.058306 1102226 command_runner.go:130] > #
	I0318 13:45:55.058315 1102226 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0318 13:45:55.058327 1102226 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0318 13:45:55.058336 1102226 command_runner.go:130] > # limitation.
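	A minimal sketch of a runtime handler that permits the seccomp notifier annotation described above; the handler name "runc-debug" is hypothetical, and the paths simply mirror the runc entry that follows:
	[crio.runtime.runtimes.runc-debug]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]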
	I0318 13:45:55.058346 1102226 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0318 13:45:55.058357 1102226 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0318 13:45:55.058366 1102226 command_runner.go:130] > runtime_type = "oci"
	I0318 13:45:55.058376 1102226 command_runner.go:130] > runtime_root = "/run/runc"
	I0318 13:45:55.058386 1102226 command_runner.go:130] > runtime_config_path = ""
	I0318 13:45:55.058394 1102226 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0318 13:45:55.058399 1102226 command_runner.go:130] > monitor_cgroup = "pod"
	I0318 13:45:55.058409 1102226 command_runner.go:130] > monitor_exec_cgroup = ""
	I0318 13:45:55.058418 1102226 command_runner.go:130] > monitor_env = [
	I0318 13:45:55.058428 1102226 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 13:45:55.058436 1102226 command_runner.go:130] > ]
	I0318 13:45:55.058447 1102226 command_runner.go:130] > privileged_without_host_devices = false
	I0318 13:45:55.058465 1102226 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0318 13:45:55.058476 1102226 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0318 13:45:55.058487 1102226 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0318 13:45:55.058498 1102226 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0318 13:45:55.058514 1102226 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0318 13:45:55.058528 1102226 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0318 13:45:55.058547 1102226 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0318 13:45:55.058563 1102226 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0318 13:45:55.058575 1102226 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0318 13:45:55.058588 1102226 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0318 13:45:55.058595 1102226 command_runner.go:130] > # Example:
	I0318 13:45:55.058600 1102226 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0318 13:45:55.058610 1102226 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0318 13:45:55.058622 1102226 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0318 13:45:55.058634 1102226 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0318 13:45:55.058649 1102226 command_runner.go:130] > # cpuset = 0
	I0318 13:45:55.058655 1102226 command_runner.go:130] > # cpushares = "0-1"
	I0318 13:45:55.058661 1102226 command_runner.go:130] > # Where:
	I0318 13:45:55.058668 1102226 command_runner.go:130] > # The workload name is workload-type.
	I0318 13:45:55.058679 1102226 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0318 13:45:55.058687 1102226 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0318 13:45:55.058692 1102226 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0318 13:45:55.058701 1102226 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0318 13:45:55.058710 1102226 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
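	Putting the pieces above together, a hypothetical workload definition plus the matching pod annotations might look like this sketch; the workload name, annotations and values are illustrative and not taken from this run's config:
	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	cpushares = "512"
	cpuset = "0-1"
	# On the pod: set the io.crio/throttled annotation (key only) to opt in, and optionally
	# io.crio.throttled.cpuset/<container-name> = "0" to override that resource per container.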
	I0318 13:45:55.058719 1102226 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0318 13:45:55.058729 1102226 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0318 13:45:55.058736 1102226 command_runner.go:130] > # Default value is set to true
	I0318 13:45:55.058743 1102226 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0318 13:45:55.058751 1102226 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0318 13:45:55.058759 1102226 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0318 13:45:55.058766 1102226 command_runner.go:130] > # Default value is set to 'false'
	I0318 13:45:55.058772 1102226 command_runner.go:130] > # disable_hostport_mapping = false
	I0318 13:45:55.058780 1102226 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0318 13:45:55.058783 1102226 command_runner.go:130] > #
	I0318 13:45:55.058791 1102226 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0318 13:45:55.058800 1102226 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0318 13:45:55.058810 1102226 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0318 13:45:55.058820 1102226 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0318 13:45:55.058830 1102226 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0318 13:45:55.058835 1102226 command_runner.go:130] > [crio.image]
	I0318 13:45:55.058844 1102226 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0318 13:45:55.058854 1102226 command_runner.go:130] > # default_transport = "docker://"
	I0318 13:45:55.058866 1102226 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0318 13:45:55.058875 1102226 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0318 13:45:55.058886 1102226 command_runner.go:130] > # global_auth_file = ""
	I0318 13:45:55.058898 1102226 command_runner.go:130] > # The image used to instantiate infra containers.
	I0318 13:45:55.058910 1102226 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:45:55.058926 1102226 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0318 13:45:55.058940 1102226 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0318 13:45:55.058953 1102226 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0318 13:45:55.058963 1102226 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:45:55.058976 1102226 command_runner.go:130] > # pause_image_auth_file = ""
	I0318 13:45:55.058989 1102226 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0318 13:45:55.059002 1102226 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0318 13:45:55.059015 1102226 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0318 13:45:55.059028 1102226 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0318 13:45:55.059038 1102226 command_runner.go:130] > # pause_command = "/pause"
	I0318 13:45:55.059054 1102226 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0318 13:45:55.059066 1102226 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0318 13:45:55.059074 1102226 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0318 13:45:55.059086 1102226 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0318 13:45:55.059099 1102226 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0318 13:45:55.059121 1102226 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0318 13:45:55.059131 1102226 command_runner.go:130] > # pinned_images = [
	I0318 13:45:55.059140 1102226 command_runner.go:130] > # ]
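	To make the exact / glob / keyword distinction above concrete, a populated list could look like the following sketch; apart from the pause image already referenced in this config, the image names are placeholders:
	pinned_images = [
		"registry.k8s.io/pause:3.9",      # exact match on the full name
		"registry.example.com/infra/*",   # glob: wildcard at the end
		"*critical*",                     # keyword: wildcards on both ends
	]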
	I0318 13:45:55.059152 1102226 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0318 13:45:55.059164 1102226 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0318 13:45:55.059174 1102226 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0318 13:45:55.059186 1102226 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0318 13:45:55.059198 1102226 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0318 13:45:55.059208 1102226 command_runner.go:130] > # signature_policy = ""
	I0318 13:45:55.059220 1102226 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0318 13:45:55.059233 1102226 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0318 13:45:55.059246 1102226 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0318 13:45:55.059258 1102226 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0318 13:45:55.059268 1102226 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0318 13:45:55.059277 1102226 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0318 13:45:55.059289 1102226 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0318 13:45:55.059303 1102226 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0318 13:45:55.059313 1102226 command_runner.go:130] > # changing them here.
	I0318 13:45:55.059323 1102226 command_runner.go:130] > # insecure_registries = [
	I0318 13:45:55.059330 1102226 command_runner.go:130] > # ]
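	The comment above recommends configuring registries in /etc/containers/registries.conf instead; a minimal sketch of such an entry, with a hypothetical registry address, would be:
	[[registry]]
	location = "registry.example.internal:5000"
	insecure = true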
	I0318 13:45:55.059341 1102226 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0318 13:45:55.059352 1102226 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0318 13:45:55.059361 1102226 command_runner.go:130] > # image_volumes = "mkdir"
	I0318 13:45:55.059370 1102226 command_runner.go:130] > # Temporary directory to use for storing big files
	I0318 13:45:55.059375 1102226 command_runner.go:130] > # big_files_temporary_dir = ""
	I0318 13:45:55.059395 1102226 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0318 13:45:55.059405 1102226 command_runner.go:130] > # CNI plugins.
	I0318 13:45:55.059410 1102226 command_runner.go:130] > [crio.network]
	I0318 13:45:55.059423 1102226 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0318 13:45:55.059435 1102226 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0318 13:45:55.059445 1102226 command_runner.go:130] > # cni_default_network = ""
	I0318 13:45:55.059457 1102226 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0318 13:45:55.059468 1102226 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0318 13:45:55.059477 1102226 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0318 13:45:55.059484 1102226 command_runner.go:130] > # plugin_dirs = [
	I0318 13:45:55.059494 1102226 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0318 13:45:55.059503 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.059512 1102226 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0318 13:45:55.059521 1102226 command_runner.go:130] > [crio.metrics]
	I0318 13:45:55.059532 1102226 command_runner.go:130] > # Globally enable or disable metrics support.
	I0318 13:45:55.059541 1102226 command_runner.go:130] > enable_metrics = true
	I0318 13:45:55.059551 1102226 command_runner.go:130] > # Specify enabled metrics collectors.
	I0318 13:45:55.059562 1102226 command_runner.go:130] > # Per default all metrics are enabled.
	I0318 13:45:55.059576 1102226 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0318 13:45:55.059588 1102226 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0318 13:45:55.059601 1102226 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0318 13:45:55.059611 1102226 command_runner.go:130] > # metrics_collectors = [
	I0318 13:45:55.059621 1102226 command_runner.go:130] > # 	"operations",
	I0318 13:45:55.059631 1102226 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0318 13:45:55.059641 1102226 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0318 13:45:55.059651 1102226 command_runner.go:130] > # 	"operations_errors",
	I0318 13:45:55.059657 1102226 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0318 13:45:55.059664 1102226 command_runner.go:130] > # 	"image_pulls_by_name",
	I0318 13:45:55.059670 1102226 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0318 13:45:55.059679 1102226 command_runner.go:130] > # 	"image_pulls_failures",
	I0318 13:45:55.059690 1102226 command_runner.go:130] > # 	"image_pulls_successes",
	I0318 13:45:55.059697 1102226 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0318 13:45:55.059707 1102226 command_runner.go:130] > # 	"image_layer_reuse",
	I0318 13:45:55.059718 1102226 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0318 13:45:55.059728 1102226 command_runner.go:130] > # 	"containers_oom_total",
	I0318 13:45:55.059737 1102226 command_runner.go:130] > # 	"containers_oom",
	I0318 13:45:55.059752 1102226 command_runner.go:130] > # 	"processes_defunct",
	I0318 13:45:55.059760 1102226 command_runner.go:130] > # 	"operations_total",
	I0318 13:45:55.059765 1102226 command_runner.go:130] > # 	"operations_latency_seconds",
	I0318 13:45:55.059775 1102226 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0318 13:45:55.059785 1102226 command_runner.go:130] > # 	"operations_errors_total",
	I0318 13:45:55.059796 1102226 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0318 13:45:55.059806 1102226 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0318 13:45:55.059816 1102226 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0318 13:45:55.059838 1102226 command_runner.go:130] > # 	"image_pulls_success_total",
	I0318 13:45:55.059849 1102226 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0318 13:45:55.059857 1102226 command_runner.go:130] > # 	"containers_oom_count_total",
	I0318 13:45:55.059868 1102226 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0318 13:45:55.059879 1102226 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0318 13:45:55.059887 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.059899 1102226 command_runner.go:130] > # The port on which the metrics server will listen.
	I0318 13:45:55.059909 1102226 command_runner.go:130] > # metrics_port = 9090
	I0318 13:45:55.059919 1102226 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0318 13:45:55.059926 1102226 command_runner.go:130] > # metrics_socket = ""
	I0318 13:45:55.059934 1102226 command_runner.go:130] > # The certificate for the secure metrics server.
	I0318 13:45:55.059947 1102226 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0318 13:45:55.059960 1102226 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0318 13:45:55.059970 1102226 command_runner.go:130] > # certificate on any modification event.
	I0318 13:45:55.059980 1102226 command_runner.go:130] > # metrics_cert = ""
	I0318 13:45:55.059992 1102226 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0318 13:45:55.060003 1102226 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0318 13:45:55.060013 1102226 command_runner.go:130] > # metrics_key = ""
	I0318 13:45:55.060024 1102226 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0318 13:45:55.060030 1102226 command_runner.go:130] > [crio.tracing]
	I0318 13:45:55.060038 1102226 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0318 13:45:55.060052 1102226 command_runner.go:130] > # enable_tracing = false
	I0318 13:45:55.060064 1102226 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0318 13:45:55.060074 1102226 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0318 13:45:55.060087 1102226 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0318 13:45:55.060098 1102226 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0318 13:45:55.060108 1102226 command_runner.go:130] > # CRI-O NRI configuration.
	I0318 13:45:55.060115 1102226 command_runner.go:130] > [crio.nri]
	I0318 13:45:55.060126 1102226 command_runner.go:130] > # Globally enable or disable NRI.
	I0318 13:45:55.060136 1102226 command_runner.go:130] > # enable_nri = false
	I0318 13:45:55.060146 1102226 command_runner.go:130] > # NRI socket to listen on.
	I0318 13:45:55.060154 1102226 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0318 13:45:55.060164 1102226 command_runner.go:130] > # NRI plugin directory to use.
	I0318 13:45:55.060174 1102226 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0318 13:45:55.060185 1102226 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0318 13:45:55.060195 1102226 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0318 13:45:55.060207 1102226 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0318 13:45:55.060215 1102226 command_runner.go:130] > # nri_disable_connections = false
	I0318 13:45:55.060224 1102226 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0318 13:45:55.060233 1102226 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0318 13:45:55.060245 1102226 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0318 13:45:55.060256 1102226 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0318 13:45:55.060269 1102226 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0318 13:45:55.060277 1102226 command_runner.go:130] > [crio.stats]
	I0318 13:45:55.060289 1102226 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0318 13:45:55.060301 1102226 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0318 13:45:55.060310 1102226 command_runner.go:130] > # stats_collection_period = 0
	I0318 13:45:55.060358 1102226 command_runner.go:130] ! time="2024-03-18 13:45:55.015802229Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0318 13:45:55.060385 1102226 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0318 13:45:55.060621 1102226 cni.go:84] Creating CNI manager for ""
	I0318 13:45:55.060638 1102226 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:45:55.060650 1102226 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:45:55.060691 1102226 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-994669 NodeName:multinode-994669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:45:55.060881 1102226 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-994669"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:45:55.060967 1102226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:45:55.073277 1102226 command_runner.go:130] > kubeadm
	I0318 13:45:55.073293 1102226 command_runner.go:130] > kubectl
	I0318 13:45:55.073297 1102226 command_runner.go:130] > kubelet
	I0318 13:45:55.073534 1102226 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:45:55.073587 1102226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:45:55.085522 1102226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0318 13:45:55.104791 1102226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:45:55.124901 1102226 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0318 13:45:55.145130 1102226 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I0318 13:45:55.149564 1102226 command_runner.go:130] > 192.168.39.57	control-plane.minikube.internal
	I0318 13:45:55.149638 1102226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:45:55.310684 1102226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:45:55.327467 1102226 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669 for IP: 192.168.39.57
	I0318 13:45:55.327503 1102226 certs.go:194] generating shared ca certs ...
	I0318 13:45:55.327523 1102226 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:45:55.327753 1102226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 13:45:55.327837 1102226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 13:45:55.327854 1102226 certs.go:256] generating profile certs ...
	I0318 13:45:55.327968 1102226 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/client.key
	I0318 13:45:55.328059 1102226 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/apiserver.key.de4d6102
	I0318 13:45:55.328116 1102226 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/proxy-client.key
	I0318 13:45:55.328132 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:45:55.328150 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:45:55.328167 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:45:55.328188 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:45:55.328203 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:45:55.328221 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:45:55.328239 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:45:55.328261 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:45:55.328347 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 13:45:55.328391 1102226 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 13:45:55.328404 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 13:45:55.328434 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:45:55.328470 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:45:55.328502 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 13:45:55.328556 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:45:55.328598 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:55.328617 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem -> /usr/share/ca-certificates/1075208.pem
	I0318 13:45:55.328635 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /usr/share/ca-certificates/10752082.pem
	I0318 13:45:55.329364 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:45:55.354116 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:45:55.379063 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:45:55.403409 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:45:55.427939 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:45:55.452186 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:45:55.478163 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:45:55.505532 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:45:55.532977 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:45:55.559046 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 13:45:55.584703 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 13:45:55.610595 1102226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:45:55.629599 1102226 ssh_runner.go:195] Run: openssl version
	I0318 13:45:55.636543 1102226 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 13:45:55.636692 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:45:55.649808 1102226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:55.655139 1102226 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:55.655288 1102226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:55.655333 1102226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:55.661103 1102226 command_runner.go:130] > b5213941
	I0318 13:45:55.661328 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:45:55.671900 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 13:45:55.683965 1102226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 13:45:55.688768 1102226 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:45:55.688795 1102226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:45:55.688842 1102226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 13:45:55.694518 1102226 command_runner.go:130] > 51391683
	I0318 13:45:55.694674 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 13:45:55.704739 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 13:45:55.716231 1102226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 13:45:55.720627 1102226 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:45:55.720835 1102226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:45:55.720883 1102226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 13:45:55.726691 1102226 command_runner.go:130] > 3ec20f2e
	I0318 13:45:55.726736 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:45:55.736679 1102226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:45:55.741035 1102226 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:45:55.741055 1102226 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 13:45:55.741061 1102226 command_runner.go:130] > Device: 253,1	Inode: 8385597     Links: 1
	I0318 13:45:55.741067 1102226 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:45:55.741076 1102226 command_runner.go:130] > Access: 2024-03-18 13:39:34.195291904 +0000
	I0318 13:45:55.741081 1102226 command_runner.go:130] > Modify: 2024-03-18 13:39:34.195291904 +0000
	I0318 13:45:55.741092 1102226 command_runner.go:130] > Change: 2024-03-18 13:39:34.195291904 +0000
	I0318 13:45:55.741098 1102226 command_runner.go:130] >  Birth: 2024-03-18 13:39:34.195291904 +0000
	I0318 13:45:55.741234 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:45:55.746640 1102226 command_runner.go:130] > Certificate will not expire
	I0318 13:45:55.746818 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:45:55.752182 1102226 command_runner.go:130] > Certificate will not expire
	I0318 13:45:55.752360 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:45:55.758028 1102226 command_runner.go:130] > Certificate will not expire
	I0318 13:45:55.758100 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:45:55.763617 1102226 command_runner.go:130] > Certificate will not expire
	I0318 13:45:55.763687 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:45:55.769359 1102226 command_runner.go:130] > Certificate will not expire
	I0318 13:45:55.769420 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:45:55.774888 1102226 command_runner.go:130] > Certificate will not expire
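	(Note: each check above runs "openssl x509 -noout -checkend 86400", which exits successfully only if the certificate is still valid 24 hours from now; that is what produces the "Certificate will not expire" lines. A minimal, assumed Go equivalent using crypto/x509 follows — it is not minikube's implementation, and the certificate path is just one of those from the log.)

	// Illustrative sketch: report whether a certificate expires within the next
	// 24 hours, the same condition "openssl x509 -noout -checkend 86400" tests.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True if "now + d" is past the certificate's NotAfter time.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}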
	I0318 13:45:55.774962 1102226 kubeadm.go:391] StartCluster: {Name:multinode-994669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-994669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.187 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:55.775126 1102226 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:45:55.775179 1102226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:45:55.810558 1102226 command_runner.go:130] > 6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408
	I0318 13:45:55.810591 1102226 command_runner.go:130] > eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb
	I0318 13:45:55.810601 1102226 command_runner.go:130] > 09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707
	I0318 13:45:55.810622 1102226 command_runner.go:130] > 7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e
	I0318 13:45:55.810642 1102226 command_runner.go:130] > 188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558
	I0318 13:45:55.810651 1102226 command_runner.go:130] > b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b
	I0318 13:45:55.810660 1102226 command_runner.go:130] > bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6
	I0318 13:45:55.810679 1102226 command_runner.go:130] > e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d
	I0318 13:45:55.812089 1102226 cri.go:89] found id: "6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408"
	I0318 13:45:55.812121 1102226 cri.go:89] found id: "eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb"
	I0318 13:45:55.812127 1102226 cri.go:89] found id: "09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707"
	I0318 13:45:55.812132 1102226 cri.go:89] found id: "7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e"
	I0318 13:45:55.812136 1102226 cri.go:89] found id: "188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558"
	I0318 13:45:55.812140 1102226 cri.go:89] found id: "b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b"
	I0318 13:45:55.812144 1102226 cri.go:89] found id: "bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6"
	I0318 13:45:55.812150 1102226 cri.go:89] found id: "e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d"
	I0318 13:45:55.812154 1102226 cri.go:89] found id: ""
	I0318 13:45:55.812217 1102226 ssh_runner.go:195] Run: sudo runc list -f json
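	(Note: before restarting the cluster, the log above enumerates the existing kube-system containers; the crictl call returns container IDs only, which are then echoed back as the "found id" lines. The sketch below is an assumed, illustrative wrapper around that same crictl invocation — it is not the actual cri.go code.)

	// Illustrative sketch: ask the CRI runtime, via crictl, for the IDs of all
	// containers labelled with the kube-system pod namespace, mirroring the
	// "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	// call in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}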
	
	
	==> CRI-O <==
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.460123817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=449936a4-b29f-4b2d-a0bb-9536e3f2202b name=/runtime.v1.RuntimeService/Version
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.461870149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f28857e-b051-45e1-b2b7-dfd4d1e57179 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.462574016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769644462546138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f28857e-b051-45e1-b2b7-dfd4d1e57179 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.463175880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbb3437e-a537-497d-963f-f0afd4450079 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.463364017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbb3437e-a537-497d-963f-f0afd4450079 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.463846075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab6fb9f215547503c2f5026fe22d54bb0c4973b8da58f09d45a1389e20d5beb8,PodSandboxId:4d786ff152ee96f98628927b79ef1fb4c65bb1b1c31ed4412e56de71beac9936,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710769596787353012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90e247cd8d814108c23bf2daec6e0ffcd336ff1f6f886604898aab8e57afb01,PodSandboxId:8b7f48024f09f5af5609cbe1da9acf581fb5cd8df6d4d0ce240253bf3fb64dde,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710769563368968629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e4c078a37823aab6ae8266e6f2a571e739a728c23b1ffb12c6c0666cd4f066,PodSandboxId:c1e7ddd456a9d70668f8620d2cbe98ca52880a69e76b17d0e186b3e29f5d3099,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769563274525133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec85d9c339487ddf79acfaa61c97552b23f26718c983d912f9d0e1293849064,PodSandboxId:f85e38707a49596f5508100ce59294a85bdae8ba1a809f6286a37b5e4104bac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769563145380693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bc8f0297b5c140dbf3c441ad40a2d84c21ab3adca8e70c50396c355cc9b76b,PodSandboxId:8fa25b4cfb48859f17f9f19df1cd3a822305a103bb2239ba34cea49e4296d1f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710769563087819999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf3cc2ad6df43671e836e0afec4c1fd92ce04a56a4fbb01030ab24f249ee6e5,PodSandboxId:06dfaaab006978e37fc5f4f741f88d330c31ab5b278d11b3a9608eea9db879d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769558449140703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c75f022d2d792ec20eb35075b8c653c85a83232050122d69fdae7aba3beb66,PodSandboxId:cae37a16cc40c5f40fd508920e72f1313262fb0b494185517fafb16ac9d245e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769558457072020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51
fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6e039078050199b0806102a9f3e9c27182a70c8eed11b4942614c08d327a2c,PodSandboxId:d8d812127351c9f156bc928093597705776f72968246de718a56eea6f37b9617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769558442390753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd5ec43f35fe26c1e7e4c1991efb67256216e9db18a59f10c4e29d919b0612c,PodSandboxId:cd5fc15879c5c9a6c34bd2472886615ac9d44235b7abb37ca81463e38677ccc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769558367655385,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:map[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e31864a374a74e22274f4257c17fe0e3bcd6bc701852676f9398db6b30e11ff,PodSandboxId:15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710769247878517313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408,PodSandboxId:fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769203179139272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb,PodSandboxId:5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769203152166634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.kubernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707,PodSandboxId:0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710769201447332193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e,PodSandboxId:1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769198148976610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558,PodSandboxId:44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769177967487073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b,PodSandboxId:8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769177918071729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6,PodSandboxId:503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769177881295276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,
},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d,PodSandboxId:2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769177842505994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:m
ap[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbb3437e-a537-497d-963f-f0afd4450079 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.506339826Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c0136d0a-09db-4047-82f6-1a506a794cff name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.506713892Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4d786ff152ee96f98628927b79ef1fb4c65bb1b1c31ed4412e56de71beac9936,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-4nbjw,Uid:2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710769596630651471,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:46:02.493161301Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c1e7ddd456a9d70668f8620d2cbe98ca52880a69e76b17d0e186b3e29f5d3099,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-pmwvq,Uid:94537a54-a7ff-4e1f-bf71-43d66bc78138,Namespace:kube-system,Attempt:
1,},State:SANDBOX_READY,CreatedAt:1710769562915020411,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:46:02.493151092Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8fa25b4cfb48859f17f9f19df1cd3a822305a103bb2239ba34cea49e4296d1f9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:48b3e25f-c978-46aa-b8d5-d40371519a5e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710769562880726336,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]s
tring{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T13:46:02.493160264Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f85e38707a49596f5508100ce59294a85bdae8ba1a809f6286a37b5e4104bac3,Metadata:&PodSandboxMetadata{Name:kube-proxy-f9tgg,Uid:d46ff588-70f2-4b72-8951-c1d1518d7bd0,Namespace:kube-system,A
ttempt:1,},State:SANDBOX_READY,CreatedAt:1710769562879677277,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:46:02.493158068Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b7f48024f09f5af5609cbe1da9acf581fb5cd8df6d4d0ce240253bf3fb64dde,Metadata:&PodSandboxMetadata{Name:kindnet-m8hth,Uid:9b18f931-1481-4999-9ff1-89fc4a11f2ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710769562819098204,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,k8s-app: kindnet,pod-template-gener
ation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:46:02.493156737Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cae37a16cc40c5f40fd508920e72f1313262fb0b494185517fafb16ac9d245e3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-994669,Uid:d8d8d68bba4c5da05ae4e5388cfe771f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710769558182520247,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d8d8d68bba4c5da05ae4e5388cfe771f,kubernetes.io/config.seen: 2024-03-18T13:45:57.489538288Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d8d812127351c9f156bc928093597705776f72968246de718a56eea6f37b9617,Metadata:&PodSandboxMetad
ata{Name:kube-scheduler-multinode-994669,Uid:d6e47bdd6c27ccb21f6946ff8943791b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710769558165747871,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e47bdd6c27ccb21f6946ff8943791b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d6e47bdd6c27ccb21f6946ff8943791b,kubernetes.io/config.seen: 2024-03-18T13:45:57.489539029Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd5fc15879c5c9a6c34bd2472886615ac9d44235b7abb37ca81463e38677ccc1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-994669,Uid:87d45fc5ff8300974beb759dc4755c67,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710769558162530790,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode
-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.57:8443,kubernetes.io/config.hash: 87d45fc5ff8300974beb759dc4755c67,kubernetes.io/config.seen: 2024-03-18T13:45:57.489537145Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:06dfaaab006978e37fc5f4f741f88d330c31ab5b278d11b3a9608eea9db879d4,Metadata:&PodSandboxMetadata{Name:etcd-multinode-994669,Uid:a947eabccfd6fe8f857f455d2bd38fd0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710769558161431693,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.57:2379,kuberne
tes.io/config.hash: a947eabccfd6fe8f857f455d2bd38fd0,kubernetes.io/config.seen: 2024-03-18T13:45:57.489533541Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-4nbjw,Uid:2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710769246139278425,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:40:45.829740544Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-pmwvq,Uid:94537a54-a7ff-4e1f-bf71-43d66bc78138,Namespace:kube-system,Att
empt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710769203000840498,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:40:02.675672790Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:48b3e25f-c978-46aa-b8d5-d40371519a5e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710769202971639216,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:m
ap[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T13:40:02.665411639Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc,Metadata:&PodSandboxMetadata{Name:kindnet-m8hth,Uid:9b18f931-1481-4999-9ff1-89fc4a11f2ec,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710769198060406191,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:39:56.850856126Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb,Metadata:&PodSandboxMetadata{Name:kube-proxy-f9tgg,Uid:d46ff588-70f2-4b72-8951-c1d1518d7bd0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710769198026600524,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,k8s-app: k
ube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:39:56.814489382Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-994669,Uid:d8d8d68bba4c5da05ae4e5388cfe771f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710769177691024728,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d8d8d68bba4c5da05ae4e5388cfe771f,kubernetes.io/config.seen: 2024-03-18T13:39:37.205068189Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86,Metad
ata:&PodSandboxMetadata{Name:kube-scheduler-multinode-994669,Uid:d6e47bdd6c27ccb21f6946ff8943791b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710769177690416765,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e47bdd6c27ccb21f6946ff8943791b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d6e47bdd6c27ccb21f6946ff8943791b,kubernetes.io/config.seen: 2024-03-18T13:39:37.205068907Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e,Metadata:&PodSandboxMetadata{Name:etcd-multinode-994669,Uid:a947eabccfd6fe8f857f455d2bd38fd0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710769177687720330,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-994
669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.57:2379,kubernetes.io/config.hash: a947eabccfd6fe8f857f455d2bd38fd0,kubernetes.io/config.seen: 2024-03-18T13:39:37.205063089Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-994669,Uid:87d45fc5ff8300974beb759dc4755c67,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710769177662941991,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoin
t: 192.168.39.57:8443,kubernetes.io/config.hash: 87d45fc5ff8300974beb759dc4755c67,kubernetes.io/config.seen: 2024-03-18T13:39:37.205067032Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c0136d0a-09db-4047-82f6-1a506a794cff name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.507698615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=515cb7d6-02b6-4d56-8801-813f6a46fb4e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.507756265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=515cb7d6-02b6-4d56-8801-813f6a46fb4e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.508108098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab6fb9f215547503c2f5026fe22d54bb0c4973b8da58f09d45a1389e20d5beb8,PodSandboxId:4d786ff152ee96f98628927b79ef1fb4c65bb1b1c31ed4412e56de71beac9936,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710769596787353012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90e247cd8d814108c23bf2daec6e0ffcd336ff1f6f886604898aab8e57afb01,PodSandboxId:8b7f48024f09f5af5609cbe1da9acf581fb5cd8df6d4d0ce240253bf3fb64dde,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710769563368968629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e4c078a37823aab6ae8266e6f2a571e739a728c23b1ffb12c6c0666cd4f066,PodSandboxId:c1e7ddd456a9d70668f8620d2cbe98ca52880a69e76b17d0e186b3e29f5d3099,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769563274525133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec85d9c339487ddf79acfaa61c97552b23f26718c983d912f9d0e1293849064,PodSandboxId:f85e38707a49596f5508100ce59294a85bdae8ba1a809f6286a37b5e4104bac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769563145380693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bc8f0297b5c140dbf3c441ad40a2d84c21ab3adca8e70c50396c355cc9b76b,PodSandboxId:8fa25b4cfb48859f17f9f19df1cd3a822305a103bb2239ba34cea49e4296d1f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710769563087819999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf3cc2ad6df43671e836e0afec4c1fd92ce04a56a4fbb01030ab24f249ee6e5,PodSandboxId:06dfaaab006978e37fc5f4f741f88d330c31ab5b278d11b3a9608eea9db879d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769558449140703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c75f022d2d792ec20eb35075b8c653c85a83232050122d69fdae7aba3beb66,PodSandboxId:cae37a16cc40c5f40fd508920e72f1313262fb0b494185517fafb16ac9d245e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769558457072020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51
fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6e039078050199b0806102a9f3e9c27182a70c8eed11b4942614c08d327a2c,PodSandboxId:d8d812127351c9f156bc928093597705776f72968246de718a56eea6f37b9617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769558442390753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd5ec43f35fe26c1e7e4c1991efb67256216e9db18a59f10c4e29d919b0612c,PodSandboxId:cd5fc15879c5c9a6c34bd2472886615ac9d44235b7abb37ca81463e38677ccc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769558367655385,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:map[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e31864a374a74e22274f4257c17fe0e3bcd6bc701852676f9398db6b30e11ff,PodSandboxId:15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710769247878517313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408,PodSandboxId:fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769203179139272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb,PodSandboxId:5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769203152166634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.kubernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707,PodSandboxId:0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710769201447332193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e,PodSandboxId:1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769198148976610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558,PodSandboxId:44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769177967487073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b,PodSandboxId:8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769177918071729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6,PodSandboxId:503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769177881295276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,
},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d,PodSandboxId:2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769177842505994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:m
ap[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=515cb7d6-02b6-4d56-8801-813f6a46fb4e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.512463458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9035c50e-1c08-4453-90cd-fff1769575d4 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.512523036Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9035c50e-1c08-4453-90cd-fff1769575d4 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.513677322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30c9d5df-d79d-4412-84d7-c7ddc5d1155e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.514082339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769644514055430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30c9d5df-d79d-4412-84d7-c7ddc5d1155e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.514687896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6f59878-4cbe-4890-be74-2224fd2ec780 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.514742224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6f59878-4cbe-4890-be74-2224fd2ec780 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.515076474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab6fb9f215547503c2f5026fe22d54bb0c4973b8da58f09d45a1389e20d5beb8,PodSandboxId:4d786ff152ee96f98628927b79ef1fb4c65bb1b1c31ed4412e56de71beac9936,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710769596787353012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90e247cd8d814108c23bf2daec6e0ffcd336ff1f6f886604898aab8e57afb01,PodSandboxId:8b7f48024f09f5af5609cbe1da9acf581fb5cd8df6d4d0ce240253bf3fb64dde,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710769563368968629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e4c078a37823aab6ae8266e6f2a571e739a728c23b1ffb12c6c0666cd4f066,PodSandboxId:c1e7ddd456a9d70668f8620d2cbe98ca52880a69e76b17d0e186b3e29f5d3099,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769563274525133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec85d9c339487ddf79acfaa61c97552b23f26718c983d912f9d0e1293849064,PodSandboxId:f85e38707a49596f5508100ce59294a85bdae8ba1a809f6286a37b5e4104bac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769563145380693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bc8f0297b5c140dbf3c441ad40a2d84c21ab3adca8e70c50396c355cc9b76b,PodSandboxId:8fa25b4cfb48859f17f9f19df1cd3a822305a103bb2239ba34cea49e4296d1f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710769563087819999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf3cc2ad6df43671e836e0afec4c1fd92ce04a56a4fbb01030ab24f249ee6e5,PodSandboxId:06dfaaab006978e37fc5f4f741f88d330c31ab5b278d11b3a9608eea9db879d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769558449140703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c75f022d2d792ec20eb35075b8c653c85a83232050122d69fdae7aba3beb66,PodSandboxId:cae37a16cc40c5f40fd508920e72f1313262fb0b494185517fafb16ac9d245e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769558457072020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51
fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6e039078050199b0806102a9f3e9c27182a70c8eed11b4942614c08d327a2c,PodSandboxId:d8d812127351c9f156bc928093597705776f72968246de718a56eea6f37b9617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769558442390753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd5ec43f35fe26c1e7e4c1991efb67256216e9db18a59f10c4e29d919b0612c,PodSandboxId:cd5fc15879c5c9a6c34bd2472886615ac9d44235b7abb37ca81463e38677ccc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769558367655385,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:map[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e31864a374a74e22274f4257c17fe0e3bcd6bc701852676f9398db6b30e11ff,PodSandboxId:15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710769247878517313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408,PodSandboxId:fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769203179139272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb,PodSandboxId:5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769203152166634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.kubernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707,PodSandboxId:0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710769201447332193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e,PodSandboxId:1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769198148976610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558,PodSandboxId:44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769177967487073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b,PodSandboxId:8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769177918071729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6,PodSandboxId:503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769177881295276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,
},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d,PodSandboxId:2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769177842505994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:m
ap[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6f59878-4cbe-4890-be74-2224fd2ec780 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.559663681Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41596028-1cb5-4839-b7f7-bf1b6bf12614 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.559734233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41596028-1cb5-4839-b7f7-bf1b6bf12614 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.560681704Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62b6abe9-81e2-4da0-bedc-eafdf6d9c649 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.561084786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769644561058310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62b6abe9-81e2-4da0-bedc-eafdf6d9c649 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.561701729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3cbe743b-8584-4593-803a-692b3cee4642 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.561781622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3cbe743b-8584-4593-803a-692b3cee4642 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:47:24 multinode-994669 crio[2839]: time="2024-03-18 13:47:24.562123787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab6fb9f215547503c2f5026fe22d54bb0c4973b8da58f09d45a1389e20d5beb8,PodSandboxId:4d786ff152ee96f98628927b79ef1fb4c65bb1b1c31ed4412e56de71beac9936,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710769596787353012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90e247cd8d814108c23bf2daec6e0ffcd336ff1f6f886604898aab8e57afb01,PodSandboxId:8b7f48024f09f5af5609cbe1da9acf581fb5cd8df6d4d0ce240253bf3fb64dde,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710769563368968629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e4c078a37823aab6ae8266e6f2a571e739a728c23b1ffb12c6c0666cd4f066,PodSandboxId:c1e7ddd456a9d70668f8620d2cbe98ca52880a69e76b17d0e186b3e29f5d3099,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769563274525133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec85d9c339487ddf79acfaa61c97552b23f26718c983d912f9d0e1293849064,PodSandboxId:f85e38707a49596f5508100ce59294a85bdae8ba1a809f6286a37b5e4104bac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769563145380693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bc8f0297b5c140dbf3c441ad40a2d84c21ab3adca8e70c50396c355cc9b76b,PodSandboxId:8fa25b4cfb48859f17f9f19df1cd3a822305a103bb2239ba34cea49e4296d1f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710769563087819999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf3cc2ad6df43671e836e0afec4c1fd92ce04a56a4fbb01030ab24f249ee6e5,PodSandboxId:06dfaaab006978e37fc5f4f741f88d330c31ab5b278d11b3a9608eea9db879d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769558449140703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c75f022d2d792ec20eb35075b8c653c85a83232050122d69fdae7aba3beb66,PodSandboxId:cae37a16cc40c5f40fd508920e72f1313262fb0b494185517fafb16ac9d245e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769558457072020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51
fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6e039078050199b0806102a9f3e9c27182a70c8eed11b4942614c08d327a2c,PodSandboxId:d8d812127351c9f156bc928093597705776f72968246de718a56eea6f37b9617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769558442390753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd5ec43f35fe26c1e7e4c1991efb67256216e9db18a59f10c4e29d919b0612c,PodSandboxId:cd5fc15879c5c9a6c34bd2472886615ac9d44235b7abb37ca81463e38677ccc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769558367655385,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:map[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e31864a374a74e22274f4257c17fe0e3bcd6bc701852676f9398db6b30e11ff,PodSandboxId:15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710769247878517313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408,PodSandboxId:fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769203179139272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb,PodSandboxId:5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769203152166634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.kubernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707,PodSandboxId:0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710769201447332193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e,PodSandboxId:1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769198148976610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558,PodSandboxId:44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769177967487073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b,PodSandboxId:8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769177918071729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6,PodSandboxId:503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769177881295276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,
},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d,PodSandboxId:2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769177842505994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:m
ap[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3cbe743b-8584-4593-803a-692b3cee4642 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ab6fb9f215547       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      47 seconds ago       Running             busybox                   1                   4d786ff152ee9       busybox-5b5d89c9d6-4nbjw
	e90e247cd8d81       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   8b7f48024f09f       kindnet-m8hth
	88e4c078a3782       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   1                   c1e7ddd456a9d       coredns-5dd5756b68-pmwvq
	eec85d9c33948       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                1                   f85e38707a495       kube-proxy-f9tgg
	d9bc8f0297b5c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   8fa25b4cfb488       storage-provisioner
	75c75f022d2d7       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   1                   cae37a16cc40c       kube-controller-manager-multinode-994669
	8cf3cc2ad6df4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      1                   06dfaaab00697       etcd-multinode-994669
	be6e039078050       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            1                   d8d812127351c       kube-scheduler-multinode-994669
	3cd5ec43f35fe       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            1                   cd5fc15879c5c       kube-apiserver-multinode-994669
	1e31864a374a7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   15ba741fc0318       busybox-5b5d89c9d6-4nbjw
	6d25b416eebed       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Exited              coredns                   0                   fbf2592d41d34       coredns-5dd5756b68-pmwvq
	eaeb32898d7b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   5f2cd94162ae3       storage-provisioner
	09589b564e838       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago        Exited              kindnet-cni               0                   0139aab85c748       kindnet-m8hth
	7affd38bc5a22       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago        Exited              kube-proxy                0                   1f23428e45300       kube-proxy-f9tgg
	188be02cea85b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago        Exited              kube-scheduler            0                   44104ccc2e985       kube-scheduler-multinode-994669
	b957b85972f36       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago        Exited              kube-controller-manager   0                   8147e55b30143       kube-controller-manager-multinode-994669
	bcc52f68fa634       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago        Exited              etcd                      0                   503d0d94bd1d9       etcd-multinode-994669
	e04f6e0a268ab       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago        Exited              kube-apiserver            0                   2623dcbffb530       kube-apiserver-multinode-994669
	
	
	==> coredns [6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408] <==
	[INFO] 10.244.1.2:57330 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001965197s
	[INFO] 10.244.1.2:39970 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103391s
	[INFO] 10.244.1.2:57592 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087847s
	[INFO] 10.244.1.2:33063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002370487s
	[INFO] 10.244.1.2:52477 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077909s
	[INFO] 10.244.1.2:52575 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097759s
	[INFO] 10.244.1.2:35955 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124998s
	[INFO] 10.244.0.3:58000 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168623s
	[INFO] 10.244.0.3:59283 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102799s
	[INFO] 10.244.0.3:46987 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050733s
	[INFO] 10.244.0.3:48885 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059192s
	[INFO] 10.244.1.2:54373 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001524s
	[INFO] 10.244.1.2:55816 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109549s
	[INFO] 10.244.1.2:59478 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135038s
	[INFO] 10.244.1.2:39606 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009545s
	[INFO] 10.244.0.3:46814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182754s
	[INFO] 10.244.0.3:33907 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000196752s
	[INFO] 10.244.0.3:59330 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102667s
	[INFO] 10.244.0.3:45408 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000149185s
	[INFO] 10.244.1.2:60952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153722s
	[INFO] 10.244.1.2:47659 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000197442s
	[INFO] 10.244.1.2:40958 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086161s
	[INFO] 10.244.1.2:46663 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000242883s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [88e4c078a37823aab6ae8266e6f2a571e739a728c23b1ffb12c6c0666cd4f066] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46426 - 6188 "HINFO IN 2855692911652182914.5550051341023083747. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02070556s
	
	
	==> describe nodes <==
	Name:               multinode-994669
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-994669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=multinode-994669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_39_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:39:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-994669
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:46:02 +0000   Mon, 18 Mar 2024 13:39:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:46:02 +0000   Mon, 18 Mar 2024 13:39:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:46:02 +0000   Mon, 18 Mar 2024 13:39:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:46:02 +0000   Mon, 18 Mar 2024 13:40:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    multinode-994669
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fb855a985a54fadb8eaa3b3c7fe3c0e
	  System UUID:                1fb855a9-85a5-4fad-b8ea-a3b3c7fe3c0e
	  Boot ID:                    32998e24-00c7-44a4-a7bd-183e2c2fc329
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4nbjw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 coredns-5dd5756b68-pmwvq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m28s
	  kube-system                 etcd-multinode-994669                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m43s
	  kube-system                 kindnet-m8hth                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m28s
	  kube-system                 kube-apiserver-multinode-994669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-controller-manager-multinode-994669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-proxy-f9tgg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-scheduler-multinode-994669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m26s              kube-proxy       
	  Normal  Starting                 81s                kube-proxy       
	  Normal  Starting                 7m41s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m41s              kubelet          Node multinode-994669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m41s              kubelet          Node multinode-994669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m41s              kubelet          Node multinode-994669 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m40s              kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m28s              node-controller  Node multinode-994669 event: Registered Node multinode-994669 in Controller
	  Normal  NodeReady                7m22s              kubelet          Node multinode-994669 status is now: NodeReady
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)  kubelet          Node multinode-994669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)  kubelet          Node multinode-994669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 87s)  kubelet          Node multinode-994669 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                node-controller  Node multinode-994669 event: Registered Node multinode-994669 in Controller
	
	
	Name:               multinode-994669-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-994669-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=multinode-994669
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_46_45_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:46:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-994669-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:47:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:47:15 +0000   Mon, 18 Mar 2024 13:46:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:47:15 +0000   Mon, 18 Mar 2024 13:46:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:47:15 +0000   Mon, 18 Mar 2024 13:46:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:47:15 +0000   Mon, 18 Mar 2024 13:46:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    multinode-994669-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 24131595995f41d78663efe9f2f8d32a
	  System UUID:                24131595-995f-41d7-8663-efe9f2f8d32a
	  Boot ID:                    456a4975-371e-4640-a34e-bca32d17d85a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-ngqq9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kindnet-zhkmw               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m49s
	  kube-system                 kube-proxy-pxm42            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m45s                  kube-proxy       
	  Normal  Starting                 36s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m50s (x5 over 6m51s)  kubelet          Node multinode-994669-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m50s (x5 over 6m51s)  kubelet          Node multinode-994669-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m50s (x5 over 6m51s)  kubelet          Node multinode-994669-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m41s                  kubelet          Node multinode-994669-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  39s (x5 over 41s)      kubelet          Node multinode-994669-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x5 over 41s)      kubelet          Node multinode-994669-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x5 over 41s)      kubelet          Node multinode-994669-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           35s                    node-controller  Node multinode-994669-m02 event: Registered Node multinode-994669-m02 in Controller
	  Normal  NodeReady                32s                    kubelet          Node multinode-994669-m02 status is now: NodeReady
	
	
	Name:               multinode-994669-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-994669-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=multinode-994669
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_47_15_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:47:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-994669-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:47:21 +0000   Mon, 18 Mar 2024 13:47:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:47:21 +0000   Mon, 18 Mar 2024 13:47:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:47:21 +0000   Mon, 18 Mar 2024 13:47:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:47:21 +0000   Mon, 18 Mar 2024 13:47:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    multinode-994669-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 dba246d09dc04d599fdcff9968230020
	  System UUID:                dba246d0-9dc0-4d59-9fdc-ff9968230020
	  Boot ID:                    8bd457b3-81d8-4e0c-a53b-774572e37f62
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6k8dh       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m1s
	  kube-system                 kube-proxy-ff8vd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m16s                  kube-proxy  
	  Normal  Starting                 5m56s                  kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  NodeHasNoDiskPressure    6m1s (x5 over 6m3s)    kubelet     Node multinode-994669-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x5 over 6m3s)    kubelet     Node multinode-994669-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m1s (x5 over 6m3s)    kubelet     Node multinode-994669-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m52s                  kubelet     Node multinode-994669-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m18s (x5 over 5m20s)  kubelet     Node multinode-994669-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x5 over 5m20s)  kubelet     Node multinode-994669-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m18s (x5 over 5m20s)  kubelet     Node multinode-994669-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m11s                  kubelet     Node multinode-994669-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  10s (x5 over 12s)      kubelet     Node multinode-994669-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x5 over 12s)      kubelet     Node multinode-994669-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x5 over 12s)      kubelet     Node multinode-994669-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3s                     kubelet     Node multinode-994669-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.171824] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.155479] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.230989] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.756949] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.057232] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.607094] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.406972] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.362286] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.086455] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.316199] systemd-fstab-generator[1459]: Ignoring "noauto" option for root device
	[  +0.088333] kauditd_printk_skb: 21 callbacks suppressed
	[Mar18 13:40] kauditd_printk_skb: 56 callbacks suppressed
	[ +44.142660] kauditd_printk_skb: 18 callbacks suppressed
	[Mar18 13:45] systemd-fstab-generator[2762]: Ignoring "noauto" option for root device
	[  +0.157015] systemd-fstab-generator[2775]: Ignoring "noauto" option for root device
	[  +0.169579] systemd-fstab-generator[2789]: Ignoring "noauto" option for root device
	[  +0.144400] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.258795] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +3.125567] systemd-fstab-generator[2924]: Ignoring "noauto" option for root device
	[  +2.048398] systemd-fstab-generator[3049]: Ignoring "noauto" option for root device
	[  +0.080694] kauditd_printk_skb: 122 callbacks suppressed
	[Mar18 13:46] kauditd_printk_skb: 52 callbacks suppressed
	[ +12.115498] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.786216] systemd-fstab-generator[3868]: Ignoring "noauto" option for root device
	[ +18.854840] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [8cf3cc2ad6df43671e836e0afec4c1fd92ce04a56a4fbb01030ab24f249ee6e5] <==
	{"level":"info","ts":"2024-03-18T13:45:58.854995Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:45:58.855022Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:45:58.855421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d switched to configuration voters=(8786012295892039485)"}
	{"level":"info","ts":"2024-03-18T13:45:58.85552Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","added-peer-id":"79ee2fa200dbf73d","added-peer-peer-urls":["https://192.168.39.57:2380"]}
	{"level":"info","ts":"2024-03-18T13:45:58.855655Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:45:58.8557Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:45:58.868895Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T13:45:58.871321Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"79ee2fa200dbf73d","initial-advertise-peer-urls":["https://192.168.39.57:2380"],"listen-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.57:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T13:45:58.87141Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T13:45:58.871606Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-03-18T13:45:58.871637Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-03-18T13:46:00.626265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T13:46:00.626385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:46:00.626462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgPreVoteResp from 79ee2fa200dbf73d at term 2"}
	{"level":"info","ts":"2024-03-18T13:46:00.626497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T13:46:00.626522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgVoteResp from 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-03-18T13:46:00.626548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became leader at term 3"}
	{"level":"info","ts":"2024-03-18T13:46:00.626574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-03-18T13:46:00.633104Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:46:00.633049Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"79ee2fa200dbf73d","local-member-attributes":"{Name:multinode-994669 ClientURLs:[https://192.168.39.57:2379]}","request-path":"/0/members/79ee2fa200dbf73d/attributes","cluster-id":"cdb6bc6ece496785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:46:00.634641Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:46:00.634876Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.57:2379"}
	{"level":"info","ts":"2024-03-18T13:46:00.635866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:46:00.635978Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T13:46:00.639134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6] <==
	{"level":"info","ts":"2024-03-18T13:39:38.598398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgVoteResp from 79ee2fa200dbf73d at term 2"}
	{"level":"info","ts":"2024-03-18T13:39:38.598406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became leader at term 2"}
	{"level":"info","ts":"2024-03-18T13:39:38.598413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 2"}
	{"level":"info","ts":"2024-03-18T13:39:38.604401Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:39:38.606513Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"79ee2fa200dbf73d","local-member-attributes":"{Name:multinode-994669 ClientURLs:[https://192.168.39.57:2379]}","request-path":"/0/members/79ee2fa200dbf73d/attributes","cluster-id":"cdb6bc6ece496785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:39:38.606785Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:39:38.627603Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:39:38.631293Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:39:38.631332Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:39:38.6384Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:39:38.638458Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:39:38.651353Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:39:38.651474Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T13:39:38.654962Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.57:2379"}
	{"level":"info","ts":"2024-03-18T13:40:37.633285Z","caller":"traceutil/trace.go:171","msg":"trace[438097044] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"108.592372ms","start":"2024-03-18T13:40:37.524587Z","end":"2024-03-18T13:40:37.633179Z","steps":["trace[438097044] 'process raft request'  (duration: 108.095316ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:44:20.110281Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-18T13:44:20.110507Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-994669","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	{"level":"warn","ts":"2024-03-18T13:44:20.11071Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:44:20.110844Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:44:20.146512Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:44:20.146569Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T13:44:20.148058Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"79ee2fa200dbf73d","current-leader-member-id":"79ee2fa200dbf73d"}
	{"level":"info","ts":"2024-03-18T13:44:20.151423Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-03-18T13:44:20.151582Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-03-18T13:44:20.15162Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-994669","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	
	
	==> kernel <==
	 13:47:25 up 8 min,  0 users,  load average: 0.21, 0.17, 0.09
	Linux multinode-994669 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707] <==
	I0318 13:43:32.538585       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:43:42.551002       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:43:42.551115       1 main.go:227] handling current node
	I0318 13:43:42.551149       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:43:42.551167       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:43:42.551367       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:43:42.551402       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:43:52.556263       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:43:52.556498       1 main.go:227] handling current node
	I0318 13:43:52.556539       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:43:52.556560       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:43:52.556725       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:43:52.556746       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:44:02.569989       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:44:02.570165       1 main.go:227] handling current node
	I0318 13:44:02.570198       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:44:02.570328       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:44:02.570529       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:44:02.570582       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:44:12.582561       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:44:12.582894       1 main.go:227] handling current node
	I0318 13:44:12.582960       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:44:12.582992       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:44:12.583274       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:44:12.583325       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e90e247cd8d814108c23bf2daec6e0ffcd336ff1f6f886604898aab8e57afb01] <==
	I0318 13:46:44.318536       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:46:54.325561       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:46:54.325694       1 main.go:227] handling current node
	I0318 13:46:54.325730       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:46:54.325751       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:46:54.325880       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:46:54.325901       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:47:04.332775       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:47:04.332849       1 main.go:227] handling current node
	I0318 13:47:04.332874       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:47:04.332886       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:47:04.333066       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:47:04.333108       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:47:14.349490       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:47:14.349530       1 main.go:227] handling current node
	I0318 13:47:14.349546       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:47:14.349558       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:47:14.349682       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:47:24.363458       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:47:24.363502       1 main.go:227] handling current node
	I0318 13:47:24.363516       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:47:24.363533       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:47:24.363651       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:47:24.363656       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.2.0/24] 
	Node multinode-994669-m03 has no CIDR, ignoring
	
	
	==> kube-apiserver [3cd5ec43f35fe26c1e7e4c1991efb67256216e9db18a59f10c4e29d919b0612c] <==
	I0318 13:46:02.003881       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:46:02.009400       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:46:02.009495       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:46:02.140755       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:46:02.193495       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:46:02.194525       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:46:02.194594       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:46:02.200391       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:46:02.200910       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:46:02.201422       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:46:02.204352       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:46:02.204484       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:46:02.204521       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:46:02.204546       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:46:02.204568       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:46:02.213580       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0318 13:46:02.217333       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0318 13:46:03.001242       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:46:04.896714       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:46:05.032953       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:46:05.044957       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:46:05.129924       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:46:05.141645       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:46:14.961781       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:46:15.057614       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d] <==
	I0318 13:44:20.139542       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0318 13:44:20.139438       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0318 13:44:20.139578       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0318 13:44:20.139608       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:44:20.140366       1 controller.go:162] Shutting down OpenAPI controller
	I0318 13:44:20.140437       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0318 13:44:20.139569       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0318 13:44:20.140377       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0318 13:44:20.139633       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0318 13:44:20.139640       1 establishing_controller.go:87] Shutting down EstablishingController
	I0318 13:44:20.140278       1 naming_controller.go:302] Shutting down NamingConditionController
	W0318 13:44:20.142108       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.142184       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.142941       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.143018       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.143054       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.143088       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.143124       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.143621       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0318 13:44:20.144360       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	W0318 13:44:20.144701       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.144881       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.144961       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.145002       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.145067       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [75c75f022d2d792ec20eb35075b8c653c85a83232050122d69fdae7aba3beb66] <==
	I0318 13:46:39.277282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.900811ms"
	I0318 13:46:39.277640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="92.975µs"
	I0318 13:46:44.914068       1 event.go:307] "Event occurred" object="multinode-994669-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-994669-m02 event: Removing Node multinode-994669-m02 from Controller"
	I0318 13:46:45.055070       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-994669-m02\" does not exist"
	I0318 13:46:45.055540       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8cd7k" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-8cd7k"
	I0318 13:46:45.069840       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-994669-m02" podCIDRs=["10.244.1.0/24"]
	I0318 13:46:45.329859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="106.888µs"
	I0318 13:46:45.546592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="60.625µs"
	I0318 13:46:45.599610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="55.564µs"
	I0318 13:46:45.617787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="68.863µs"
	I0318 13:46:45.618559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.226µs"
	I0318 13:46:45.626282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="114.007µs"
	I0318 13:46:45.629846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.321µs"
	I0318 13:46:49.915913       1 event.go:307] "Event occurred" object="multinode-994669-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-994669-m02 event: Registered Node multinode-994669-m02 in Controller"
	I0318 13:46:52.223055       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:46:52.243957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="45.415µs"
	I0318 13:46:52.261068       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="52.514µs"
	I0318 13:46:54.777115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.031162ms"
	I0318 13:46:54.777535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="150.714µs"
	I0318 13:46:54.933017       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ngqq9" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-ngqq9"
	I0318 13:47:11.619052       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:47:14.347967       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:47:14.348533       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-994669-m03\" does not exist"
	I0318 13:47:14.361958       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-994669-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:47:21.374308       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m03"
	
	
	==> kube-controller-manager [b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b] <==
	I0318 13:40:48.584638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="12.630168ms"
	I0318 13:40:48.585576       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="30.767µs"
	I0318 13:41:23.142994       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:41:23.149496       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-994669-m03\" does not exist"
	I0318 13:41:23.169963       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6k8dh"
	I0318 13:41:23.176397       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-994669-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:41:23.176856       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ff8vd"
	I0318 13:41:26.692064       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-994669-m03"
	I0318 13:41:26.692393       1 event.go:307] "Event occurred" object="multinode-994669-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-994669-m03 event: Registered Node multinode-994669-m03 in Controller"
	I0318 13:41:32.974004       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:42:03.686720       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:42:06.114758       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-994669-m03\" does not exist"
	I0318 13:42:06.115936       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:42:06.128295       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-994669-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:42:13.401669       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:42:56.760359       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:42:56.761096       1 event.go:307] "Event occurred" object="multinode-994669-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-994669-m03 status is now: NodeNotReady"
	I0318 13:42:56.778848       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ff8vd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:42:56.797895       1 event.go:307] "Event occurred" object="kube-system/kindnet-6k8dh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:43:01.808750       1 event.go:307] "Event occurred" object="multinode-994669-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-994669-m02 status is now: NodeNotReady"
	I0318 13:43:01.821034       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-pxm42" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:43:01.838997       1 event.go:307] "Event occurred" object="kube-system/kindnet-zhkmw" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:43:01.861480       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8cd7k" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:43:01.876783       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="20.870424ms"
	I0318 13:43:01.877674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="135.292µs"
	
	
	==> kube-proxy [7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e] <==
	I0318 13:39:58.364112       1 server_others.go:69] "Using iptables proxy"
	I0318 13:39:58.386912       1 node.go:141] Successfully retrieved node IP: 192.168.39.57
	I0318 13:39:58.432530       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:39:58.432570       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:39:58.435266       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:39:58.435371       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:39:58.436056       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:39:58.436097       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:39:58.438322       1 config.go:188] "Starting service config controller"
	I0318 13:39:58.438743       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:39:58.438837       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:39:58.438861       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:39:58.440137       1 config.go:315] "Starting node config controller"
	I0318 13:39:58.440173       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:39:58.539777       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:39:58.539833       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:39:58.540386       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [eec85d9c339487ddf79acfaa61c97552b23f26718c983d912f9d0e1293849064] <==
	I0318 13:46:03.450018       1 server_others.go:69] "Using iptables proxy"
	I0318 13:46:03.505830       1 node.go:141] Successfully retrieved node IP: 192.168.39.57
	I0318 13:46:03.575372       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:46:03.575398       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:46:03.581951       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:46:03.582069       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:46:03.582326       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:46:03.582338       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:46:03.584472       1 config.go:188] "Starting service config controller"
	I0318 13:46:03.584598       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:46:03.584683       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:46:03.584732       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:46:03.585341       1 config.go:315] "Starting node config controller"
	I0318 13:46:03.585410       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:46:03.686858       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:46:03.687027       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:46:03.687052       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558] <==
	E0318 13:39:40.713641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:39:40.713691       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:39:40.713802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:39:40.713906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:39:40.713969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:39:41.552593       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:39:41.552695       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:39:41.625046       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:39:41.625088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:39:41.714489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:39:41.714573       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:39:41.747876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 13:39:41.748016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:39:41.803738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:39:41.803869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:39:41.972829       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:39:41.973054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:39:42.003147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 13:39:42.003199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 13:39:42.026066       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:39:42.026118       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:39:44.397642       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:44:20.100769       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 13:44:20.100956       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0318 13:44:20.102076       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [be6e039078050199b0806102a9f3e9c27182a70c8eed11b4942614c08d327a2c] <==
	I0318 13:45:59.375052       1 serving.go:348] Generated self-signed cert in-memory
	W0318 13:46:02.093794       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 13:46:02.093901       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:46:02.093913       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 13:46:02.093921       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:46:02.141521       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:46:02.141558       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:46:02.143500       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:46:02.143764       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:46:02.143812       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:46:02.143862       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:46:02.245174       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 13:46:02 multinode-994669 kubelet[3056]: I0318 13:46:02.493806    3056 topology_manager.go:215] "Topology Admit Handler" podUID="d46ff588-70f2-4b72-8951-c1d1518d7bd0" podNamespace="kube-system" podName="kube-proxy-f9tgg"
	Mar 18 13:46:02 multinode-994669 kubelet[3056]: I0318 13:46:02.493891    3056 topology_manager.go:215] "Topology Admit Handler" podUID="48b3e25f-c978-46aa-b8d5-d40371519a5e" podNamespace="kube-system" podName="storage-provisioner"
	Mar 18 13:46:02 multinode-994669 kubelet[3056]: I0318 13:46:02.493958    3056 topology_manager.go:215] "Topology Admit Handler" podUID="2d344d9f-d488-4d70-8e7a-bfbd1f4724b0" podNamespace="default" podName="busybox-5b5d89c9d6-4nbjw"
	Mar 18 13:46:02 multinode-994669 kubelet[3056]: I0318 13:46:02.509023    3056 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 18 13:46:02 multinode-994669 kubelet[3056]: I0318 13:46:02.609672    3056 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d46ff588-70f2-4b72-8951-c1d1518d7bd0-lib-modules\") pod \"kube-proxy-f9tgg\" (UID: \"d46ff588-70f2-4b72-8951-c1d1518d7bd0\") " pod="kube-system/kube-proxy-f9tgg"
	Mar 18 13:46:02 multinode-994669 kubelet[3056]: I0318 13:46:02.609975    3056 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b18f931-1481-4999-9ff1-89fc4a11f2ec-cni-cfg\") pod \"kindnet-m8hth\" (UID: \"9b18f931-1481-4999-9ff1-89fc4a11f2ec\") " pod="kube-system/kindnet-m8hth"
	Mar 18 13:46:02 multinode-994669 kubelet[3056]: I0318 13:46:02.610030    3056 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b18f931-1481-4999-9ff1-89fc4a11f2ec-lib-modules\") pod \"kindnet-m8hth\" (UID: \"9b18f931-1481-4999-9ff1-89fc4a11f2ec\") " pod="kube-system/kindnet-m8hth"
	Mar 18 13:46:02 multinode-994669 kubelet[3056]: I0318 13:46:02.610103    3056 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/48b3e25f-c978-46aa-b8d5-d40371519a5e-tmp\") pod \"storage-provisioner\" (UID: \"48b3e25f-c978-46aa-b8d5-d40371519a5e\") " pod="kube-system/storage-provisioner"
	Mar 18 13:46:02 multinode-994669 kubelet[3056]: I0318 13:46:02.610175    3056 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d46ff588-70f2-4b72-8951-c1d1518d7bd0-xtables-lock\") pod \"kube-proxy-f9tgg\" (UID: \"d46ff588-70f2-4b72-8951-c1d1518d7bd0\") " pod="kube-system/kube-proxy-f9tgg"
	Mar 18 13:46:02 multinode-994669 kubelet[3056]: I0318 13:46:02.611195    3056 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b18f931-1481-4999-9ff1-89fc4a11f2ec-xtables-lock\") pod \"kindnet-m8hth\" (UID: \"9b18f931-1481-4999-9ff1-89fc4a11f2ec\") " pod="kube-system/kindnet-m8hth"
	Mar 18 13:46:11 multinode-994669 kubelet[3056]: I0318 13:46:11.743932    3056 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 18 13:46:57 multinode-994669 kubelet[3056]: E0318 13:46:57.536982    3056 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:46:57 multinode-994669 kubelet[3056]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:46:57 multinode-994669 kubelet[3056]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:46:57 multinode-994669 kubelet[3056]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:46:57 multinode-994669 kubelet[3056]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:46:57 multinode-994669 kubelet[3056]: E0318 13:46:57.620192    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/podd8d8d68bba4c5da05ae4e5388cfe771f/crio-8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d: Error finding container 8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d: Status 404 returned error can't find the container with id 8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d
	Mar 18 13:46:57 multinode-994669 kubelet[3056]: E0318 13:46:57.620883    3056 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podd46ff588-70f2-4b72-8951-c1d1518d7bd0/crio-1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb: Error finding container 1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb: Status 404 returned error can't find the container with id 1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb
	Mar 18 13:46:57 multinode-994669 kubelet[3056]: E0318 13:46:57.621436    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod87d45fc5ff8300974beb759dc4755c67/crio-2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d: Error finding container 2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d: Status 404 returned error can't find the container with id 2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d
	Mar 18 13:46:57 multinode-994669 kubelet[3056]: E0318 13:46:57.622011    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda947eabccfd6fe8f857f455d2bd38fd0/crio-503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e: Error finding container 503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e: Status 404 returned error can't find the container with id 503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e
	Mar 18 13:46:57 multinode-994669 kubelet[3056]: E0318 13:46:57.622744    3056 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod48b3e25f-c978-46aa-b8d5-d40371519a5e/crio-5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804: Error finding container 5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804: Status 404 returned error can't find the container with id 5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804
	Mar 18 13:46:57 multinode-994669 kubelet[3056]: E0318 13:46:57.623510    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/podd6e47bdd6c27ccb21f6946ff8943791b/crio-44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86: Error finding container 44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86: Status 404 returned error can't find the container with id 44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86
	Mar 18 13:46:57 multinode-994669 kubelet[3056]: E0318 13:46:57.624301    3056 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod2d344d9f-d488-4d70-8e7a-bfbd1f4724b0/crio-15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f: Error finding container 15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f: Status 404 returned error can't find the container with id 15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f
	Mar 18 13:46:57 multinode-994669 kubelet[3056]: E0318 13:46:57.624783    3056 manager.go:1106] Failed to create existing container: /kubepods/pod9b18f931-1481-4999-9ff1-89fc4a11f2ec/crio-0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc: Error finding container 0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc: Status 404 returned error can't find the container with id 0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc
	Mar 18 13:46:57 multinode-994669 kubelet[3056]: E0318 13:46:57.625359    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod94537a54-a7ff-4e1f-bf71-43d66bc78138/crio-fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e: Error finding container fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e: Status 404 returned error can't find the container with id fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:47:24.127453 1103043 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18427-1067917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
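
The "token too long" error in the stderr block above is the stock bufio.Scanner failure: the scanner refuses to return a single line longer than its default 64 KiB token limit, which the very long start/config lines written to lastStart.txt can exceed. A minimal Go sketch of reading such a file with a raised limit (hypothetical file name; this is an illustration, not minikube's own implementation):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    )

    func main() {
    	// Hypothetical path; the failing read in this report is of .minikube/logs/lastStart.txt.
    	f, err := os.Open("lastStart.txt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	// The default max token size is 64 KiB; one oversized line yields
    	// "bufio.Scanner: token too long". Allow lines up to 10 MiB instead.
    	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
    	for sc.Scan() {
    		fmt.Println(sc.Text())
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, "scan error:", err)
    	}
    }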
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-994669 -n multinode-994669
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-994669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (309.71s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 stop
E0318 13:47:37.320383 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 13:49:17.918496 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-994669 stop: exit status 82 (2m0.48961024s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-994669-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
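
The advisory box above asks for the output of `minikube logs --file=logs.txt` when filing an issue. A minimal Go sketch of collecting those logs for this profile (binary path and profile name taken from this report; illustrative only, not part of the test harness):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Same binary and profile used throughout this run.
    	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-994669",
    		"logs", "--file=logs.txt")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "collecting logs failed:", err)
    		os.Exit(1)
    	}
    }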
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-994669 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-994669 status: exit status 3 (18.824220422s)

                                                
                                                
-- stdout --
	multinode-994669
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994669-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:49:47.876293 1103591 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0318 13:49:47.876334 1103591 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-994669 status" : exit status 3
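
The harness decides pass/fail from minikube's exit codes: 82 for the stop timeout above, 3 for the degraded status just reported. A minimal Go sketch of invoking the same status command and reading its exit code (illustrative only; the real check lives in multinode_test.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same invocation the test makes against this profile.
    	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-994669", "status")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))

    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// minikube status signals unreachable nodes via a non-zero exit code (3 here).
    		fmt.Println("status exit code:", exitErr.ExitCode())
    	}
    }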
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-994669 -n multinode-994669
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-994669 logs -n 25: (1.671319313s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m02:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669:/home/docker/cp-test_multinode-994669-m02_multinode-994669.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n multinode-994669 sudo cat                                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /home/docker/cp-test_multinode-994669-m02_multinode-994669.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m02:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03:/home/docker/cp-test_multinode-994669-m02_multinode-994669-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n multinode-994669-m03 sudo cat                                   | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /home/docker/cp-test_multinode-994669-m02_multinode-994669-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp testdata/cp-test.txt                                                | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m03:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile486103846/001/cp-test_multinode-994669-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m03:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669:/home/docker/cp-test_multinode-994669-m03_multinode-994669.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n multinode-994669 sudo cat                                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /home/docker/cp-test_multinode-994669-m03_multinode-994669.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-994669 cp multinode-994669-m03:/home/docker/cp-test.txt                       | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m02:/home/docker/cp-test_multinode-994669-m03_multinode-994669-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n                                                                 | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | multinode-994669-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-994669 ssh -n multinode-994669-m02 sudo cat                                   | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | /home/docker/cp-test_multinode-994669-m03_multinode-994669-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-994669 node stop m03                                                          | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	| node    | multinode-994669 node start                                                             | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-994669                                                                | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	| stop    | -p multinode-994669                                                                     | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	| start   | -p multinode-994669                                                                     | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:47 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-994669                                                                | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:47 UTC |                     |
	| node    | multinode-994669 node delete                                                            | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:47 UTC | 18 Mar 24 13:47 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-994669 stop                                                                   | multinode-994669 | jenkins | v1.32.0 | 18 Mar 24 13:47 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:44:19
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:44:19.199941 1102226 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:44:19.200230 1102226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:44:19.200240 1102226 out.go:304] Setting ErrFile to fd 2...
	I0318 13:44:19.200245 1102226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:44:19.200438 1102226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:44:19.201009 1102226 out.go:298] Setting JSON to false
	I0318 13:44:19.202053 1102226 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":19606,"bootTime":1710749853,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:44:19.202127 1102226 start.go:139] virtualization: kvm guest
	I0318 13:44:19.206265 1102226 out.go:177] * [multinode-994669] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:44:19.208081 1102226 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 13:44:19.208041 1102226 notify.go:220] Checking for updates...
	I0318 13:44:19.209533 1102226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:44:19.210959 1102226 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:44:19.212418 1102226 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:44:19.213841 1102226 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:44:19.215181 1102226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:44:19.216918 1102226 config.go:182] Loaded profile config "multinode-994669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:44:19.217025 1102226 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:44:19.217529 1102226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:44:19.217580 1102226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:44:19.235210 1102226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34831
	I0318 13:44:19.235731 1102226 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:44:19.236331 1102226 main.go:141] libmachine: Using API Version  1
	I0318 13:44:19.236356 1102226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:44:19.236695 1102226 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:44:19.236917 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:44:19.272789 1102226 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:44:19.274275 1102226 start.go:297] selected driver: kvm2
	I0318 13:44:19.274307 1102226 start.go:901] validating driver "kvm2" against &{Name:multinode-994669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:multinode-994669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.187 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:44:19.274507 1102226 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:44:19.274935 1102226 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:44:19.275059 1102226 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:44:19.290693 1102226 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:44:19.291663 1102226 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:44:19.291748 1102226 cni.go:84] Creating CNI manager for ""
	I0318 13:44:19.291764 1102226 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:44:19.291864 1102226 start.go:340] cluster config:
	{Name:multinode-994669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-994669 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.187 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:44:19.292082 1102226 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:44:19.293889 1102226 out.go:177] * Starting "multinode-994669" primary control-plane node in "multinode-994669" cluster
	I0318 13:44:19.295066 1102226 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:44:19.295106 1102226 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:44:19.295117 1102226 cache.go:56] Caching tarball of preloaded images
	I0318 13:44:19.295191 1102226 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:44:19.295203 1102226 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:44:19.295326 1102226 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/config.json ...
	I0318 13:44:19.295556 1102226 start.go:360] acquireMachinesLock for multinode-994669: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:44:19.295604 1102226 start.go:364] duration metric: took 27.674µs to acquireMachinesLock for "multinode-994669"
	I0318 13:44:19.295620 1102226 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:44:19.295626 1102226 fix.go:54] fixHost starting: 
	I0318 13:44:19.295908 1102226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:44:19.295941 1102226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:44:19.310477 1102226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33771
	I0318 13:44:19.310941 1102226 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:44:19.311399 1102226 main.go:141] libmachine: Using API Version  1
	I0318 13:44:19.311419 1102226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:44:19.311762 1102226 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:44:19.312007 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:44:19.312212 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetState
	I0318 13:44:19.314116 1102226 fix.go:112] recreateIfNeeded on multinode-994669: state=Running err=<nil>
	W0318 13:44:19.314149 1102226 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:44:19.316767 1102226 out.go:177] * Updating the running kvm2 "multinode-994669" VM ...
	I0318 13:44:19.317975 1102226 machine.go:94] provisionDockerMachine start ...
	I0318 13:44:19.317996 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:44:19.318220 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.320746 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.321187 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.321218 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.321314 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:44:19.321504 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.321639 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.321794 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:44:19.321949 1102226 main.go:141] libmachine: Using SSH client type: native
	I0318 13:44:19.322142 1102226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0318 13:44:19.322155 1102226 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:44:19.441727 1102226 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-994669
	
	I0318 13:44:19.441761 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetMachineName
	I0318 13:44:19.442032 1102226 buildroot.go:166] provisioning hostname "multinode-994669"
	I0318 13:44:19.442060 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetMachineName
	I0318 13:44:19.442290 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.445337 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.445739 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.445781 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.445965 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:44:19.446167 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.446358 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.446516 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:44:19.446739 1102226 main.go:141] libmachine: Using SSH client type: native
	I0318 13:44:19.446965 1102226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0318 13:44:19.446983 1102226 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-994669 && echo "multinode-994669" | sudo tee /etc/hostname
	I0318 13:44:19.578079 1102226 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-994669
	
	I0318 13:44:19.578117 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.581055 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.581434 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.581500 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.581691 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:44:19.581915 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.582094 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.582236 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:44:19.582434 1102226 main.go:141] libmachine: Using SSH client type: native
	I0318 13:44:19.582614 1102226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0318 13:44:19.582631 1102226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-994669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-994669/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-994669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:44:19.705336 1102226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:44:19.705367 1102226 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 13:44:19.705392 1102226 buildroot.go:174] setting up certificates
	I0318 13:44:19.705403 1102226 provision.go:84] configureAuth start
	I0318 13:44:19.705412 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetMachineName
	I0318 13:44:19.705697 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetIP
	I0318 13:44:19.708563 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.708988 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.709016 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.709181 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.711417 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.711777 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.711811 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.711966 1102226 provision.go:143] copyHostCerts
	I0318 13:44:19.712014 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:44:19.712056 1102226 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 13:44:19.712065 1102226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 13:44:19.712131 1102226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 13:44:19.712204 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:44:19.712221 1102226 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 13:44:19.712228 1102226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 13:44:19.712252 1102226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 13:44:19.712289 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:44:19.712310 1102226 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 13:44:19.712316 1102226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 13:44:19.712336 1102226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 13:44:19.712379 1102226 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.multinode-994669 san=[127.0.0.1 192.168.39.57 localhost minikube multinode-994669]
	I0318 13:44:19.769536 1102226 provision.go:177] copyRemoteCerts
	I0318 13:44:19.769608 1102226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:44:19.769635 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.772426 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.772783 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.772811 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.773038 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:44:19.773260 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.773410 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:44:19.773542 1102226 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/multinode-994669/id_rsa Username:docker}
	I0318 13:44:19.865378 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:44:19.865469 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 13:44:19.894383 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:44:19.894448 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:44:19.926069 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:44:19.926159 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:44:19.962159 1102226 provision.go:87] duration metric: took 256.743488ms to configureAuth
	I0318 13:44:19.962189 1102226 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:44:19.962455 1102226 config.go:182] Loaded profile config "multinode-994669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:44:19.962554 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:44:19.965329 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.965809 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:44:19.965854 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:44:19.965996 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:44:19.966198 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.966392 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:44:19.966545 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:44:19.966717 1102226 main.go:141] libmachine: Using SSH client type: native
	I0318 13:44:19.966891 1102226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0318 13:44:19.966905 1102226 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:45:50.678830 1102226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:45:50.678862 1102226 machine.go:97] duration metric: took 1m31.360872215s to provisionDockerMachine
	I0318 13:45:50.678878 1102226 start.go:293] postStartSetup for "multinode-994669" (driver="kvm2")
	I0318 13:45:50.678893 1102226 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:45:50.678924 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:45:50.679295 1102226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:45:50.679326 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:45:50.682700 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.683116 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:50.683158 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.683299 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:45:50.683496 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:45:50.683658 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:45:50.683876 1102226 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/multinode-994669/id_rsa Username:docker}
	I0318 13:45:50.772430 1102226 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:45:50.776936 1102226 command_runner.go:130] > NAME=Buildroot
	I0318 13:45:50.776953 1102226 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 13:45:50.776957 1102226 command_runner.go:130] > ID=buildroot
	I0318 13:45:50.776961 1102226 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 13:45:50.776966 1102226 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 13:45:50.776994 1102226 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:45:50.777007 1102226 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 13:45:50.777066 1102226 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 13:45:50.777150 1102226 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 13:45:50.777161 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /etc/ssl/certs/10752082.pem
	I0318 13:45:50.777265 1102226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:45:50.787501 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:45:50.813020 1102226 start.go:296] duration metric: took 134.123266ms for postStartSetup
	I0318 13:45:50.813076 1102226 fix.go:56] duration metric: took 1m31.517450336s for fixHost
	I0318 13:45:50.813102 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:45:50.816199 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.816549 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:50.816591 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.816701 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:45:50.816909 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:45:50.817105 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:45:50.817233 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:45:50.817386 1102226 main.go:141] libmachine: Using SSH client type: native
	I0318 13:45:50.817561 1102226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0318 13:45:50.817572 1102226 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:45:50.928786 1102226 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769550.906873363
	
	I0318 13:45:50.928815 1102226 fix.go:216] guest clock: 1710769550.906873363
	I0318 13:45:50.928823 1102226 fix.go:229] Guest: 2024-03-18 13:45:50.906873363 +0000 UTC Remote: 2024-03-18 13:45:50.813081995 +0000 UTC m=+91.663053370 (delta=93.791368ms)
	I0318 13:45:50.928861 1102226 fix.go:200] guest clock delta is within tolerance: 93.791368ms
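(Annotation) The fix.go lines above compare the guest clock returned over SSH with the local wall clock and accept the host because the drift is only ~94ms. A minimal sketch of that comparison, using the two timestamps printed in the log and an assumed one-second tolerance (the real threshold is not shown in the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both timestamps are taken from the fix.go lines above.
	guest := time.Unix(1710769550, 906873363).UTC()
	remote := time.Date(2024, 3, 18, 13, 45, 50, 813081995, time.UTC)

	delta := guest.Sub(remote)
	const tolerance = time.Second // assumed threshold for illustration only

	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
}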
	I0318 13:45:50.928869 1102226 start.go:83] releasing machines lock for "multinode-994669", held for 1m31.633255129s
	I0318 13:45:50.928890 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:45:50.929204 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetIP
	I0318 13:45:50.932364 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.932843 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:50.932880 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.933021 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:45:50.933609 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:45:50.933845 1102226 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:45:50.933968 1102226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:45:50.934014 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:45:50.934107 1102226 ssh_runner.go:195] Run: cat /version.json
	I0318 13:45:50.934135 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:45:50.936967 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.937312 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.937345 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:50.937366 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.937534 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:45:50.937726 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:45:50.937889 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:45:50.937926 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:50.937952 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:50.938052 1102226 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/multinode-994669/id_rsa Username:docker}
	I0318 13:45:50.938076 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:45:50.938191 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:45:50.938307 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:45:50.938418 1102226 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/multinode-994669/id_rsa Username:docker}
	I0318 13:45:51.053214 1102226 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 13:45:51.054036 1102226 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 13:45:51.054237 1102226 ssh_runner.go:195] Run: systemctl --version
	I0318 13:45:51.060242 1102226 command_runner.go:130] > systemd 252 (252)
	I0318 13:45:51.060277 1102226 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 13:45:51.060478 1102226 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:45:51.220374 1102226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 13:45:51.231207 1102226 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 13:45:51.231297 1102226 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:45:51.231370 1102226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:45:51.242179 1102226 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 13:45:51.242212 1102226 start.go:494] detecting cgroup driver to use...
	I0318 13:45:51.242293 1102226 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:45:51.260556 1102226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:45:51.276925 1102226 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:45:51.277000 1102226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:45:51.293505 1102226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:45:51.308566 1102226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:45:51.463621 1102226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:45:51.603848 1102226 docker.go:233] disabling docker service ...
	I0318 13:45:51.603928 1102226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:45:51.622182 1102226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:45:51.637427 1102226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:45:51.775666 1102226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:45:51.919491 1102226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:45:51.936464 1102226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:45:51.956829 1102226 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0318 13:45:51.957218 1102226 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:45:51.957285 1102226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:45:51.969747 1102226 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:45:51.969829 1102226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:45:51.981655 1102226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:45:51.993332 1102226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:45:52.005015 1102226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:45:52.017136 1102226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:45:52.027437 1102226 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 13:45:52.027561 1102226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:45:52.038895 1102226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:45:52.176127 1102226 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:45:54.775377 1102226 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.599203839s)
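(Annotation) The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O pins registry.k8s.io/pause:3.9 and uses the cgroupfs cgroup manager, then reloads systemd and restarts crio (which accounts for the ~2.6s wait). A minimal stand-alone sketch that replays the same commands — not minikube's ssh_runner; it assumes it is run directly on the guest with sudo available:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied from the ssh_runner lines above.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			fmt.Printf("%s failed: %v\n%s", c, err, out)
			return
		}
	}
	fmt.Println("crio reconfigured and restarted")
}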
	I0318 13:45:54.775428 1102226 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:45:54.775494 1102226 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:45:54.780512 1102226 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0318 13:45:54.780536 1102226 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 13:45:54.780551 1102226 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I0318 13:45:54.780561 1102226 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:45:54.780570 1102226 command_runner.go:130] > Access: 2024-03-18 13:45:54.628328888 +0000
	I0318 13:45:54.780590 1102226 command_runner.go:130] > Modify: 2024-03-18 13:45:54.628328888 +0000
	I0318 13:45:54.780601 1102226 command_runner.go:130] > Change: 2024-03-18 13:45:54.628328888 +0000
	I0318 13:45:54.780607 1102226 command_runner.go:130] >  Birth: -
	I0318 13:45:54.780649 1102226 start.go:562] Will wait 60s for crictl version
	I0318 13:45:54.780713 1102226 ssh_runner.go:195] Run: which crictl
	I0318 13:45:54.784680 1102226 command_runner.go:130] > /usr/bin/crictl
	I0318 13:45:54.784764 1102226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:45:54.824404 1102226 command_runner.go:130] > Version:  0.1.0
	I0318 13:45:54.824428 1102226 command_runner.go:130] > RuntimeName:  cri-o
	I0318 13:45:54.824432 1102226 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0318 13:45:54.824437 1102226 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 13:45:54.824621 1102226 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:45:54.824693 1102226 ssh_runner.go:195] Run: crio --version
	I0318 13:45:54.854273 1102226 command_runner.go:130] > crio version 1.29.1
	I0318 13:45:54.854304 1102226 command_runner.go:130] > Version:        1.29.1
	I0318 13:45:54.854313 1102226 command_runner.go:130] > GitCommit:      unknown
	I0318 13:45:54.854318 1102226 command_runner.go:130] > GitCommitDate:  unknown
	I0318 13:45:54.854324 1102226 command_runner.go:130] > GitTreeState:   clean
	I0318 13:45:54.854329 1102226 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0318 13:45:54.854334 1102226 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 13:45:54.854338 1102226 command_runner.go:130] > Compiler:       gc
	I0318 13:45:54.854342 1102226 command_runner.go:130] > Platform:       linux/amd64
	I0318 13:45:54.854346 1102226 command_runner.go:130] > Linkmode:       dynamic
	I0318 13:45:54.854351 1102226 command_runner.go:130] > BuildTags:      
	I0318 13:45:54.854355 1102226 command_runner.go:130] >   containers_image_ostree_stub
	I0318 13:45:54.854360 1102226 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 13:45:54.854364 1102226 command_runner.go:130] >   btrfs_noversion
	I0318 13:45:54.854368 1102226 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 13:45:54.854376 1102226 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 13:45:54.854379 1102226 command_runner.go:130] >   seccomp
	I0318 13:45:54.854383 1102226 command_runner.go:130] > LDFlags:          unknown
	I0318 13:45:54.854387 1102226 command_runner.go:130] > SeccompEnabled:   true
	I0318 13:45:54.854391 1102226 command_runner.go:130] > AppArmorEnabled:  false
	I0318 13:45:54.854460 1102226 ssh_runner.go:195] Run: crio --version
	I0318 13:45:54.883884 1102226 command_runner.go:130] > crio version 1.29.1
	I0318 13:45:54.883923 1102226 command_runner.go:130] > Version:        1.29.1
	I0318 13:45:54.883932 1102226 command_runner.go:130] > GitCommit:      unknown
	I0318 13:45:54.883939 1102226 command_runner.go:130] > GitCommitDate:  unknown
	I0318 13:45:54.883945 1102226 command_runner.go:130] > GitTreeState:   clean
	I0318 13:45:54.883954 1102226 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0318 13:45:54.883960 1102226 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 13:45:54.883965 1102226 command_runner.go:130] > Compiler:       gc
	I0318 13:45:54.883972 1102226 command_runner.go:130] > Platform:       linux/amd64
	I0318 13:45:54.883979 1102226 command_runner.go:130] > Linkmode:       dynamic
	I0318 13:45:54.883994 1102226 command_runner.go:130] > BuildTags:      
	I0318 13:45:54.884005 1102226 command_runner.go:130] >   containers_image_ostree_stub
	I0318 13:45:54.884015 1102226 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 13:45:54.884024 1102226 command_runner.go:130] >   btrfs_noversion
	I0318 13:45:54.884035 1102226 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 13:45:54.884044 1102226 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 13:45:54.884054 1102226 command_runner.go:130] >   seccomp
	I0318 13:45:54.884063 1102226 command_runner.go:130] > LDFlags:          unknown
	I0318 13:45:54.884070 1102226 command_runner.go:130] > SeccompEnabled:   true
	I0318 13:45:54.884079 1102226 command_runner.go:130] > AppArmorEnabled:  false
	I0318 13:45:54.888270 1102226 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:45:54.889848 1102226 main.go:141] libmachine: (multinode-994669) Calling .GetIP
	I0318 13:45:54.892709 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:54.893067 1102226 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:45:54.893096 1102226 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:45:54.893307 1102226 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:45:54.897749 1102226 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0318 13:45:54.897836 1102226 kubeadm.go:877] updating cluster {Name:multinode-994669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.4 ClusterName:multinode-994669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.187 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:45:54.897969 1102226 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:45:54.898017 1102226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:45:54.951895 1102226 command_runner.go:130] > {
	I0318 13:45:54.951921 1102226 command_runner.go:130] >   "images": [
	I0318 13:45:54.951927 1102226 command_runner.go:130] >     {
	I0318 13:45:54.951940 1102226 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 13:45:54.951949 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.951958 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 13:45:54.951968 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.951974 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.951982 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 13:45:54.951990 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 13:45:54.951997 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952001 1102226 command_runner.go:130] >       "size": "65258016",
	I0318 13:45:54.952005 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.952012 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952020 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952028 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952035 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952049 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952059 1102226 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 13:45:54.952066 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952074 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 13:45:54.952080 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952084 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.952091 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 13:45:54.952106 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 13:45:54.952115 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952122 1102226 command_runner.go:130] >       "size": "65291810",
	I0318 13:45:54.952132 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.952157 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952168 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952173 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952176 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952181 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952194 1102226 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 13:45:54.952204 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952216 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 13:45:54.952231 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952241 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.952255 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 13:45:54.952266 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 13:45:54.952274 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952285 1102226 command_runner.go:130] >       "size": "1363676",
	I0318 13:45:54.952295 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.952304 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952313 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952322 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952331 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952339 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952347 1102226 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 13:45:54.952352 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952359 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 13:45:54.952369 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952379 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.952395 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 13:45:54.952418 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 13:45:54.952427 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952435 1102226 command_runner.go:130] >       "size": "31470524",
	I0318 13:45:54.952439 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.952449 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952459 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952468 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952477 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952486 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952498 1102226 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 13:45:54.952508 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952517 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 13:45:54.952523 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952528 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.952544 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 13:45:54.952559 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 13:45:54.952568 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952576 1102226 command_runner.go:130] >       "size": "53621675",
	I0318 13:45:54.952599 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.952607 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952611 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952621 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952630 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952638 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952650 1102226 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 13:45:54.952659 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952769 1102226 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 13:45:54.952791 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952803 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.952818 1102226 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 13:45:54.952835 1102226 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 13:45:54.952845 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.952857 1102226 command_runner.go:130] >       "size": "295456551",
	I0318 13:45:54.952868 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.952879 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.952890 1102226 command_runner.go:130] >       },
	I0318 13:45:54.952901 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.952911 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.952922 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.952933 1102226 command_runner.go:130] >     },
	I0318 13:45:54.952950 1102226 command_runner.go:130] >     {
	I0318 13:45:54.952965 1102226 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 13:45:54.952977 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.952990 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 13:45:54.953001 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953012 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.953028 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 13:45:54.953046 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 13:45:54.953057 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953071 1102226 command_runner.go:130] >       "size": "127226832",
	I0318 13:45:54.953083 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.953094 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.953105 1102226 command_runner.go:130] >       },
	I0318 13:45:54.953116 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.953143 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.953155 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.953166 1102226 command_runner.go:130] >     },
	I0318 13:45:54.953176 1102226 command_runner.go:130] >     {
	I0318 13:45:54.953188 1102226 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 13:45:54.953199 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.953213 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 13:45:54.953224 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953232 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.953269 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 13:45:54.953284 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 13:45:54.953300 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953315 1102226 command_runner.go:130] >       "size": "123261750",
	I0318 13:45:54.953324 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.953332 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.953342 1102226 command_runner.go:130] >       },
	I0318 13:45:54.953351 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.953361 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.953367 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.953374 1102226 command_runner.go:130] >     },
	I0318 13:45:54.953385 1102226 command_runner.go:130] >     {
	I0318 13:45:54.953397 1102226 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 13:45:54.953408 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.953417 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 13:45:54.953424 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953431 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.953444 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 13:45:54.953453 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 13:45:54.953458 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953465 1102226 command_runner.go:130] >       "size": "74749335",
	I0318 13:45:54.953472 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.953479 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.953486 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.953493 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.953499 1102226 command_runner.go:130] >     },
	I0318 13:45:54.953505 1102226 command_runner.go:130] >     {
	I0318 13:45:54.953523 1102226 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 13:45:54.953531 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.953538 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 13:45:54.953542 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953558 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.953571 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 13:45:54.953584 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 13:45:54.953591 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953599 1102226 command_runner.go:130] >       "size": "61551410",
	I0318 13:45:54.953606 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.953618 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.953624 1102226 command_runner.go:130] >       },
	I0318 13:45:54.953634 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.953642 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.953653 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.953660 1102226 command_runner.go:130] >     },
	I0318 13:45:54.953671 1102226 command_runner.go:130] >     {
	I0318 13:45:54.953703 1102226 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 13:45:54.953728 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.953741 1102226 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 13:45:54.953751 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953758 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.953775 1102226 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 13:45:54.953790 1102226 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 13:45:54.953801 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.953809 1102226 command_runner.go:130] >       "size": "750414",
	I0318 13:45:54.953819 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.953824 1102226 command_runner.go:130] >         "value": "65535"
	I0318 13:45:54.953830 1102226 command_runner.go:130] >       },
	I0318 13:45:54.953837 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.953849 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.953858 1102226 command_runner.go:130] >       "pinned": true
	I0318 13:45:54.953874 1102226 command_runner.go:130] >     }
	I0318 13:45:54.953880 1102226 command_runner.go:130] >   ]
	I0318 13:45:54.953887 1102226 command_runner.go:130] > }
	I0318 13:45:54.954188 1102226 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:45:54.954207 1102226 crio.go:415] Images already preloaded, skipping extraction
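(Annotation) The preload check above shells out to `sudo crictl images --output json` and inspects the returned image list before deciding that extraction can be skipped. A minimal sketch of decoding that payload — hypothetical struct names, mapping only the fields visible in the JSON above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors only the fields visible in the crictl output above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, "size:", img.Size)
	}
}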
	I0318 13:45:54.954276 1102226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:45:54.994088 1102226 command_runner.go:130] > {
	I0318 13:45:54.994112 1102226 command_runner.go:130] >   "images": [
	I0318 13:45:54.994115 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994123 1102226 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 13:45:54.994127 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994133 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 13:45:54.994137 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994141 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994157 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 13:45:54.994170 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 13:45:54.994174 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994179 1102226 command_runner.go:130] >       "size": "65258016",
	I0318 13:45:54.994183 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994187 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994195 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994200 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994203 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994206 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994213 1102226 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 13:45:54.994219 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994225 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 13:45:54.994229 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994233 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994240 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 13:45:54.994248 1102226 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 13:45:54.994252 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994256 1102226 command_runner.go:130] >       "size": "65291810",
	I0318 13:45:54.994263 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994270 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994275 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994281 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994285 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994288 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994293 1102226 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 13:45:54.994298 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994303 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 13:45:54.994307 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994314 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994320 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 13:45:54.994327 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 13:45:54.994331 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994335 1102226 command_runner.go:130] >       "size": "1363676",
	I0318 13:45:54.994338 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994342 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994346 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994357 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994362 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994365 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994371 1102226 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 13:45:54.994376 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994381 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 13:45:54.994384 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994388 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994396 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 13:45:54.994410 1102226 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 13:45:54.994421 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994425 1102226 command_runner.go:130] >       "size": "31470524",
	I0318 13:45:54.994428 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994431 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994435 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994439 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994442 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994446 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994452 1102226 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 13:45:54.994456 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994461 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 13:45:54.994464 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994471 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994478 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 13:45:54.994485 1102226 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 13:45:54.994490 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994495 1102226 command_runner.go:130] >       "size": "53621675",
	I0318 13:45:54.994501 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994504 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994508 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994512 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994516 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994519 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994525 1102226 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 13:45:54.994537 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994542 1102226 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 13:45:54.994552 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994559 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994566 1102226 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 13:45:54.994575 1102226 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 13:45:54.994579 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994582 1102226 command_runner.go:130] >       "size": "295456551",
	I0318 13:45:54.994585 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.994589 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.994595 1102226 command_runner.go:130] >       },
	I0318 13:45:54.994599 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994605 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994609 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994613 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994618 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994626 1102226 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 13:45:54.994632 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994637 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 13:45:54.994643 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994648 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994659 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 13:45:54.994670 1102226 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 13:45:54.994676 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994683 1102226 command_runner.go:130] >       "size": "127226832",
	I0318 13:45:54.994690 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.994693 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.994697 1102226 command_runner.go:130] >       },
	I0318 13:45:54.994701 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994708 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994712 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994715 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994718 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994724 1102226 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 13:45:54.994730 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994736 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 13:45:54.994739 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994743 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994774 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 13:45:54.994785 1102226 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 13:45:54.994788 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994792 1102226 command_runner.go:130] >       "size": "123261750",
	I0318 13:45:54.994796 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.994799 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.994803 1102226 command_runner.go:130] >       },
	I0318 13:45:54.994807 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994810 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994814 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994818 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994821 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994828 1102226 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 13:45:54.994832 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994837 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 13:45:54.994843 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994847 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994856 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 13:45:54.994863 1102226 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 13:45:54.994870 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994874 1102226 command_runner.go:130] >       "size": "74749335",
	I0318 13:45:54.994878 1102226 command_runner.go:130] >       "uid": null,
	I0318 13:45:54.994884 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994888 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994892 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994897 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994900 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994908 1102226 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 13:45:54.994912 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.994919 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 13:45:54.994922 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994926 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.994936 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 13:45:54.994946 1102226 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 13:45:54.994949 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.994953 1102226 command_runner.go:130] >       "size": "61551410",
	I0318 13:45:54.994965 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.994972 1102226 command_runner.go:130] >         "value": "0"
	I0318 13:45:54.994975 1102226 command_runner.go:130] >       },
	I0318 13:45:54.994979 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.994983 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.994986 1102226 command_runner.go:130] >       "pinned": false
	I0318 13:45:54.994990 1102226 command_runner.go:130] >     },
	I0318 13:45:54.994993 1102226 command_runner.go:130] >     {
	I0318 13:45:54.994999 1102226 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 13:45:54.995004 1102226 command_runner.go:130] >       "repoTags": [
	I0318 13:45:54.995008 1102226 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 13:45:54.995013 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.995017 1102226 command_runner.go:130] >       "repoDigests": [
	I0318 13:45:54.995026 1102226 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 13:45:54.995035 1102226 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 13:45:54.995039 1102226 command_runner.go:130] >       ],
	I0318 13:45:54.995043 1102226 command_runner.go:130] >       "size": "750414",
	I0318 13:45:54.995049 1102226 command_runner.go:130] >       "uid": {
	I0318 13:45:54.995053 1102226 command_runner.go:130] >         "value": "65535"
	I0318 13:45:54.995056 1102226 command_runner.go:130] >       },
	I0318 13:45:54.995062 1102226 command_runner.go:130] >       "username": "",
	I0318 13:45:54.995066 1102226 command_runner.go:130] >       "spec": null,
	I0318 13:45:54.995070 1102226 command_runner.go:130] >       "pinned": true
	I0318 13:45:54.995075 1102226 command_runner.go:130] >     }
	I0318 13:45:54.995078 1102226 command_runner.go:130] >   ]
	I0318 13:45:54.995082 1102226 command_runner.go:130] > }
	I0318 13:45:54.995675 1102226 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:45:54.995692 1102226 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:45:54.995709 1102226 kubeadm.go:928] updating node { 192.168.39.57 8443 v1.28.4 crio true true} ...
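	[editor's note] The JSON above is the runtime's image inventory; the two summary lines conclude that every image expected from the preload tarball is already present, so cache loading is skipped. A minimal sketch of that kind of check (hypothetical helper names, not minikube's actual cache_images code), assuming the required tags and the runtime-reported tags are available as plain string slices:

	// preload_check.go - minimal sketch, assuming required/listed tag slices.
	package main

	import "fmt"

	// allPreloaded reports whether every required image tag appears in the
	// list returned by the container runtime (as in the JSON dump above).
	func allPreloaded(required, listed []string) bool {
		have := make(map[string]bool, len(listed))
		for _, tag := range listed {
			have[tag] = true
		}
		for _, tag := range required {
			if !have[tag] {
				return false
			}
		}
		return true
	}

	func main() {
		listed := []string{
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/kube-proxy:v1.28.4",
			"registry.k8s.io/pause:3.9",
		}
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/pause:3.9",
		}
		fmt.Println("all images are preloaded:", allPreloaded(required, listed))
	}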
	I0318 13:45:54.995839 1102226 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-994669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-994669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
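	[editor's note] The block above is the kubelet systemd drop-in minikube renders for this node, followed by the cluster config it was derived from. The empty "ExecStart=" line is the standard systemd way of clearing the packaged unit's command so the next ExecStart can redefine it with node-specific flags (--hostname-override, --node-ip). A minimal sketch of rendering such a drop-in (hypothetical helper, not minikube's kubeadm.go template):

	// kubelet_dropin.go - minimal sketch, assuming version/hostname/IP inputs.
	package main

	import "fmt"

	// kubeletDropIn renders a systemd drop-in like the one logged above,
	// clearing ExecStart and redefining it with per-node settings.
	func kubeletDropIn(version, hostname, nodeIP string) string {
		return fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

	[Install]
	`, version, hostname, nodeIP)
	}

	func main() {
		fmt.Print(kubeletDropIn("v1.28.4", "multinode-994669", "192.168.39.57"))
	}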
	I0318 13:45:54.995911 1102226 ssh_runner.go:195] Run: crio config
	I0318 13:45:55.047760 1102226 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0318 13:45:55.047793 1102226 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0318 13:45:55.047802 1102226 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0318 13:45:55.047807 1102226 command_runner.go:130] > #
	I0318 13:45:55.047817 1102226 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0318 13:45:55.047835 1102226 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0318 13:45:55.047860 1102226 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0318 13:45:55.047871 1102226 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0318 13:45:55.047877 1102226 command_runner.go:130] > # reload'.
	I0318 13:45:55.047883 1102226 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0318 13:45:55.047899 1102226 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0318 13:45:55.047910 1102226 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0318 13:45:55.047916 1102226 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0318 13:45:55.047920 1102226 command_runner.go:130] > [crio]
	I0318 13:45:55.047925 1102226 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0318 13:45:55.047931 1102226 command_runner.go:130] > # containers images, in this directory.
	I0318 13:45:55.047940 1102226 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0318 13:45:55.047957 1102226 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0318 13:45:55.048131 1102226 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0318 13:45:55.048154 1102226 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0318 13:45:55.048277 1102226 command_runner.go:130] > # imagestore = ""
	I0318 13:45:55.048302 1102226 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0318 13:45:55.048311 1102226 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0318 13:45:55.048396 1102226 command_runner.go:130] > storage_driver = "overlay"
	I0318 13:45:55.048411 1102226 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0318 13:45:55.048421 1102226 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0318 13:45:55.048427 1102226 command_runner.go:130] > storage_option = [
	I0318 13:45:55.048583 1102226 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0318 13:45:55.048612 1102226 command_runner.go:130] > ]
	I0318 13:45:55.048631 1102226 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0318 13:45:55.048644 1102226 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0318 13:45:55.048870 1102226 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0318 13:45:55.048885 1102226 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0318 13:45:55.048895 1102226 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0318 13:45:55.048905 1102226 command_runner.go:130] > # always happen on a node reboot
	I0318 13:45:55.049258 1102226 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0318 13:45:55.049281 1102226 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0318 13:45:55.049295 1102226 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0318 13:45:55.049303 1102226 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0318 13:45:55.049414 1102226 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0318 13:45:55.049431 1102226 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0318 13:45:55.049444 1102226 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0318 13:45:55.049648 1102226 command_runner.go:130] > # internal_wipe = true
	I0318 13:45:55.049664 1102226 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0318 13:45:55.049670 1102226 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0318 13:45:55.049904 1102226 command_runner.go:130] > # internal_repair = false
	I0318 13:45:55.049913 1102226 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0318 13:45:55.049919 1102226 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0318 13:45:55.049924 1102226 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0318 13:45:55.050324 1102226 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0318 13:45:55.050333 1102226 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0318 13:45:55.050337 1102226 command_runner.go:130] > [crio.api]
	I0318 13:45:55.050343 1102226 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0318 13:45:55.050602 1102226 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0318 13:45:55.050617 1102226 command_runner.go:130] > # IP address on which the stream server will listen.
	I0318 13:45:55.050854 1102226 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0318 13:45:55.050865 1102226 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0318 13:45:55.050871 1102226 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0318 13:45:55.051157 1102226 command_runner.go:130] > # stream_port = "0"
	I0318 13:45:55.051170 1102226 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0318 13:45:55.051427 1102226 command_runner.go:130] > # stream_enable_tls = false
	I0318 13:45:55.051438 1102226 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0318 13:45:55.051661 1102226 command_runner.go:130] > # stream_idle_timeout = ""
	I0318 13:45:55.051683 1102226 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0318 13:45:55.051693 1102226 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0318 13:45:55.051702 1102226 command_runner.go:130] > # minutes.
	I0318 13:45:55.051887 1102226 command_runner.go:130] > # stream_tls_cert = ""
	I0318 13:45:55.051904 1102226 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0318 13:45:55.051910 1102226 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0318 13:45:55.052194 1102226 command_runner.go:130] > # stream_tls_key = ""
	I0318 13:45:55.052211 1102226 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0318 13:45:55.052222 1102226 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0318 13:45:55.052254 1102226 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0318 13:45:55.052428 1102226 command_runner.go:130] > # stream_tls_ca = ""
	I0318 13:45:55.052451 1102226 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 13:45:55.052547 1102226 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0318 13:45:55.052564 1102226 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 13:45:55.052726 1102226 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0318 13:45:55.052736 1102226 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0318 13:45:55.052742 1102226 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0318 13:45:55.052745 1102226 command_runner.go:130] > [crio.runtime]
	I0318 13:45:55.052753 1102226 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0318 13:45:55.052763 1102226 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0318 13:45:55.052774 1102226 command_runner.go:130] > # "nofile=1024:2048"
	I0318 13:45:55.052784 1102226 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0318 13:45:55.052860 1102226 command_runner.go:130] > # default_ulimits = [
	I0318 13:45:55.053057 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.053067 1102226 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0318 13:45:55.053372 1102226 command_runner.go:130] > # no_pivot = false
	I0318 13:45:55.053386 1102226 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0318 13:45:55.053396 1102226 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0318 13:45:55.055125 1102226 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0318 13:45:55.055137 1102226 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0318 13:45:55.055142 1102226 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0318 13:45:55.055149 1102226 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 13:45:55.055157 1102226 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0318 13:45:55.055169 1102226 command_runner.go:130] > # Cgroup setting for conmon
	I0318 13:45:55.055181 1102226 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0318 13:45:55.055189 1102226 command_runner.go:130] > conmon_cgroup = "pod"
	I0318 13:45:55.055196 1102226 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0318 13:45:55.055203 1102226 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0318 13:45:55.055209 1102226 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 13:45:55.055215 1102226 command_runner.go:130] > conmon_env = [
	I0318 13:45:55.055221 1102226 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 13:45:55.055227 1102226 command_runner.go:130] > ]
	I0318 13:45:55.055232 1102226 command_runner.go:130] > # Additional environment variables to set for all the
	I0318 13:45:55.055240 1102226 command_runner.go:130] > # containers. These are overridden if set in the
	I0318 13:45:55.055253 1102226 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0318 13:45:55.055263 1102226 command_runner.go:130] > # default_env = [
	I0318 13:45:55.055269 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.055282 1102226 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0318 13:45:55.055295 1102226 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0318 13:45:55.055301 1102226 command_runner.go:130] > # selinux = false
	I0318 13:45:55.055307 1102226 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0318 13:45:55.055322 1102226 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0318 13:45:55.055331 1102226 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0318 13:45:55.055338 1102226 command_runner.go:130] > # seccomp_profile = ""
	I0318 13:45:55.055346 1102226 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0318 13:45:55.055358 1102226 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0318 13:45:55.055372 1102226 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0318 13:45:55.055383 1102226 command_runner.go:130] > # which might increase security.
	I0318 13:45:55.055394 1102226 command_runner.go:130] > # This option is currently deprecated,
	I0318 13:45:55.055406 1102226 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0318 13:45:55.055413 1102226 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0318 13:45:55.055419 1102226 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0318 13:45:55.055427 1102226 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0318 13:45:55.055435 1102226 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0318 13:45:55.055445 1102226 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0318 13:45:55.055455 1102226 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:45:55.055467 1102226 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0318 13:45:55.055478 1102226 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0318 13:45:55.055488 1102226 command_runner.go:130] > # the cgroup blockio controller.
	I0318 13:45:55.055498 1102226 command_runner.go:130] > # blockio_config_file = ""
	I0318 13:45:55.055511 1102226 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0318 13:45:55.055520 1102226 command_runner.go:130] > # blockio parameters.
	I0318 13:45:55.055528 1102226 command_runner.go:130] > # blockio_reload = false
	I0318 13:45:55.055534 1102226 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0318 13:45:55.055540 1102226 command_runner.go:130] > # irqbalance daemon.
	I0318 13:45:55.055548 1102226 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0318 13:45:55.055562 1102226 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0318 13:45:55.055576 1102226 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0318 13:45:55.055590 1102226 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0318 13:45:55.055602 1102226 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0318 13:45:55.055615 1102226 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0318 13:45:55.055624 1102226 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:45:55.055633 1102226 command_runner.go:130] > # rdt_config_file = ""
	I0318 13:45:55.055646 1102226 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0318 13:45:55.055654 1102226 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0318 13:45:55.055692 1102226 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0318 13:45:55.055703 1102226 command_runner.go:130] > # separate_pull_cgroup = ""
	I0318 13:45:55.055719 1102226 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0318 13:45:55.055729 1102226 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0318 13:45:55.055738 1102226 command_runner.go:130] > # will be added.
	I0318 13:45:55.055748 1102226 command_runner.go:130] > # default_capabilities = [
	I0318 13:45:55.055757 1102226 command_runner.go:130] > # 	"CHOWN",
	I0318 13:45:55.055764 1102226 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0318 13:45:55.055773 1102226 command_runner.go:130] > # 	"FSETID",
	I0318 13:45:55.055783 1102226 command_runner.go:130] > # 	"FOWNER",
	I0318 13:45:55.055792 1102226 command_runner.go:130] > # 	"SETGID",
	I0318 13:45:55.055801 1102226 command_runner.go:130] > # 	"SETUID",
	I0318 13:45:55.055810 1102226 command_runner.go:130] > # 	"SETPCAP",
	I0318 13:45:55.055819 1102226 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0318 13:45:55.055836 1102226 command_runner.go:130] > # 	"KILL",
	I0318 13:45:55.055842 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.055855 1102226 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0318 13:45:55.055870 1102226 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0318 13:45:55.055880 1102226 command_runner.go:130] > # add_inheritable_capabilities = false
	I0318 13:45:55.055893 1102226 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0318 13:45:55.055905 1102226 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 13:45:55.055914 1102226 command_runner.go:130] > # default_sysctls = [
	I0318 13:45:55.055921 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.055926 1102226 command_runner.go:130] > # List of devices on the host that a
	I0318 13:45:55.055939 1102226 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0318 13:45:55.055950 1102226 command_runner.go:130] > # allowed_devices = [
	I0318 13:45:55.055957 1102226 command_runner.go:130] > # 	"/dev/fuse",
	I0318 13:45:55.055965 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.055976 1102226 command_runner.go:130] > # List of additional devices. specified as
	I0318 13:45:55.055991 1102226 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0318 13:45:55.056001 1102226 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0318 13:45:55.056012 1102226 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 13:45:55.056020 1102226 command_runner.go:130] > # additional_devices = [
	I0318 13:45:55.056024 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.056032 1102226 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0318 13:45:55.056049 1102226 command_runner.go:130] > # cdi_spec_dirs = [
	I0318 13:45:55.056059 1102226 command_runner.go:130] > # 	"/etc/cdi",
	I0318 13:45:55.056069 1102226 command_runner.go:130] > # 	"/var/run/cdi",
	I0318 13:45:55.056085 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.056098 1102226 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0318 13:45:55.056108 1102226 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0318 13:45:55.056116 1102226 command_runner.go:130] > # Defaults to false.
	I0318 13:45:55.056127 1102226 command_runner.go:130] > # device_ownership_from_security_context = false
	I0318 13:45:55.056141 1102226 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0318 13:45:55.056158 1102226 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0318 13:45:55.056168 1102226 command_runner.go:130] > # hooks_dir = [
	I0318 13:45:55.056178 1102226 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0318 13:45:55.056186 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.056197 1102226 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0318 13:45:55.056218 1102226 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0318 13:45:55.056231 1102226 command_runner.go:130] > # its default mounts from the following two files:
	I0318 13:45:55.056240 1102226 command_runner.go:130] > #
	I0318 13:45:55.056253 1102226 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0318 13:45:55.056266 1102226 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0318 13:45:55.056279 1102226 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0318 13:45:55.056287 1102226 command_runner.go:130] > #
	I0318 13:45:55.056297 1102226 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0318 13:45:55.056310 1102226 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0318 13:45:55.056322 1102226 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0318 13:45:55.056330 1102226 command_runner.go:130] > #      only add mounts it finds in this file.
	I0318 13:45:55.056338 1102226 command_runner.go:130] > #
	I0318 13:45:55.056346 1102226 command_runner.go:130] > # default_mounts_file = ""
	I0318 13:45:55.056357 1102226 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0318 13:45:55.056371 1102226 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0318 13:45:55.056380 1102226 command_runner.go:130] > pids_limit = 1024
	I0318 13:45:55.056393 1102226 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0318 13:45:55.056407 1102226 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0318 13:45:55.056426 1102226 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0318 13:45:55.056442 1102226 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0318 13:45:55.056451 1102226 command_runner.go:130] > # log_size_max = -1
	I0318 13:45:55.056463 1102226 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0318 13:45:55.056473 1102226 command_runner.go:130] > # log_to_journald = false
	I0318 13:45:55.056486 1102226 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0318 13:45:55.056499 1102226 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0318 13:45:55.056516 1102226 command_runner.go:130] > # Path to directory for container attach sockets.
	I0318 13:45:55.056528 1102226 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0318 13:45:55.056539 1102226 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0318 13:45:55.056549 1102226 command_runner.go:130] > # bind_mount_prefix = ""
	I0318 13:45:55.056561 1102226 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0318 13:45:55.056571 1102226 command_runner.go:130] > # read_only = false
	I0318 13:45:55.056585 1102226 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0318 13:45:55.056598 1102226 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0318 13:45:55.056607 1102226 command_runner.go:130] > # live configuration reload.
	I0318 13:45:55.056617 1102226 command_runner.go:130] > # log_level = "info"
	I0318 13:45:55.056628 1102226 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0318 13:45:55.056636 1102226 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:45:55.056645 1102226 command_runner.go:130] > # log_filter = ""
	I0318 13:45:55.056655 1102226 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0318 13:45:55.056668 1102226 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0318 13:45:55.056678 1102226 command_runner.go:130] > # separated by comma.
	I0318 13:45:55.056694 1102226 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:45:55.056703 1102226 command_runner.go:130] > # uid_mappings = ""
	I0318 13:45:55.056714 1102226 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0318 13:45:55.056724 1102226 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0318 13:45:55.056737 1102226 command_runner.go:130] > # separated by comma.
	I0318 13:45:55.056753 1102226 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:45:55.056763 1102226 command_runner.go:130] > # gid_mappings = ""
	I0318 13:45:55.056776 1102226 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0318 13:45:55.056788 1102226 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 13:45:55.056800 1102226 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 13:45:55.056822 1102226 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:45:55.056831 1102226 command_runner.go:130] > # minimum_mappable_uid = -1
	I0318 13:45:55.056845 1102226 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0318 13:45:55.056858 1102226 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 13:45:55.056870 1102226 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 13:45:55.056885 1102226 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:45:55.056895 1102226 command_runner.go:130] > # minimum_mappable_gid = -1
	I0318 13:45:55.056906 1102226 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0318 13:45:55.056917 1102226 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0318 13:45:55.056930 1102226 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0318 13:45:55.056947 1102226 command_runner.go:130] > # ctr_stop_timeout = 30
	I0318 13:45:55.056961 1102226 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0318 13:45:55.056974 1102226 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0318 13:45:55.056984 1102226 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0318 13:45:55.056995 1102226 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0318 13:45:55.057004 1102226 command_runner.go:130] > drop_infra_ctr = false
	I0318 13:45:55.057014 1102226 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0318 13:45:55.057024 1102226 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0318 13:45:55.057039 1102226 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0318 13:45:55.057054 1102226 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0318 13:45:55.057065 1102226 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0318 13:45:55.057077 1102226 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0318 13:45:55.057090 1102226 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0318 13:45:55.057101 1102226 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0318 13:45:55.057111 1102226 command_runner.go:130] > # shared_cpuset = ""
	I0318 13:45:55.057122 1102226 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0318 13:45:55.057129 1102226 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0318 13:45:55.057136 1102226 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0318 13:45:55.057151 1102226 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0318 13:45:55.057161 1102226 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0318 13:45:55.057173 1102226 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0318 13:45:55.057185 1102226 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0318 13:45:55.057195 1102226 command_runner.go:130] > # enable_criu_support = false
	I0318 13:45:55.057206 1102226 command_runner.go:130] > # Enable/disable the generation of the container,
	I0318 13:45:55.057214 1102226 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0318 13:45:55.057223 1102226 command_runner.go:130] > # enable_pod_events = false
	I0318 13:45:55.057237 1102226 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0318 13:45:55.057251 1102226 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0318 13:45:55.057262 1102226 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0318 13:45:55.057272 1102226 command_runner.go:130] > # default_runtime = "runc"
	I0318 13:45:55.057283 1102226 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0318 13:45:55.057296 1102226 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0318 13:45:55.057310 1102226 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0318 13:45:55.057322 1102226 command_runner.go:130] > # creation as a file is not desired either.
	I0318 13:45:55.057337 1102226 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0318 13:45:55.057348 1102226 command_runner.go:130] > # the hostname is being managed dynamically.
	I0318 13:45:55.057366 1102226 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0318 13:45:55.057375 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.057385 1102226 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0318 13:45:55.057397 1102226 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0318 13:45:55.057409 1102226 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0318 13:45:55.057421 1102226 command_runner.go:130] > # Each entry in the table should follow the format:
	I0318 13:45:55.057430 1102226 command_runner.go:130] > #
	I0318 13:45:55.057440 1102226 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0318 13:45:55.057450 1102226 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0318 13:45:55.057460 1102226 command_runner.go:130] > # runtime_type = "oci"
	I0318 13:45:55.057545 1102226 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0318 13:45:55.057560 1102226 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0318 13:45:55.057564 1102226 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0318 13:45:55.057568 1102226 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0318 13:45:55.057575 1102226 command_runner.go:130] > # monitor_env = []
	I0318 13:45:55.057586 1102226 command_runner.go:130] > # privileged_without_host_devices = false
	I0318 13:45:55.057596 1102226 command_runner.go:130] > # allowed_annotations = []
	I0318 13:45:55.057608 1102226 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0318 13:45:55.057617 1102226 command_runner.go:130] > # Where:
	I0318 13:45:55.057629 1102226 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0318 13:45:55.057641 1102226 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0318 13:45:55.057651 1102226 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0318 13:45:55.057660 1102226 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0318 13:45:55.057665 1102226 command_runner.go:130] > #   in $PATH.
	I0318 13:45:55.057679 1102226 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0318 13:45:55.057690 1102226 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0318 13:45:55.057702 1102226 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0318 13:45:55.057711 1102226 command_runner.go:130] > #   state.
	I0318 13:45:55.057724 1102226 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0318 13:45:55.057743 1102226 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0318 13:45:55.057753 1102226 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0318 13:45:55.057765 1102226 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0318 13:45:55.057779 1102226 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0318 13:45:55.057793 1102226 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0318 13:45:55.057804 1102226 command_runner.go:130] > #   The currently recognized values are:
	I0318 13:45:55.057818 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0318 13:45:55.057838 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0318 13:45:55.057848 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0318 13:45:55.057861 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0318 13:45:55.057876 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0318 13:45:55.057890 1102226 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0318 13:45:55.057904 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0318 13:45:55.057917 1102226 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0318 13:45:55.057929 1102226 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0318 13:45:55.057938 1102226 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0318 13:45:55.057947 1102226 command_runner.go:130] > #   deprecated option "conmon".
	I0318 13:45:55.057963 1102226 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0318 13:45:55.057974 1102226 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0318 13:45:55.057988 1102226 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0318 13:45:55.057999 1102226 command_runner.go:130] > #   should be moved to the container's cgroup
	I0318 13:45:55.058013 1102226 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0318 13:45:55.058021 1102226 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0318 13:45:55.058032 1102226 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0318 13:45:55.058049 1102226 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0318 13:45:55.058058 1102226 command_runner.go:130] > #
	I0318 13:45:55.058066 1102226 command_runner.go:130] > # Using the seccomp notifier feature:
	I0318 13:45:55.058075 1102226 command_runner.go:130] > #
	I0318 13:45:55.058091 1102226 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0318 13:45:55.058104 1102226 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0318 13:45:55.058112 1102226 command_runner.go:130] > #
	I0318 13:45:55.058125 1102226 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0318 13:45:55.058134 1102226 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0318 13:45:55.058142 1102226 command_runner.go:130] > #
	I0318 13:45:55.058156 1102226 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0318 13:45:55.058165 1102226 command_runner.go:130] > # feature.
	I0318 13:45:55.058173 1102226 command_runner.go:130] > #
	I0318 13:45:55.058182 1102226 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0318 13:45:55.058195 1102226 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0318 13:45:55.058208 1102226 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0318 13:45:55.058217 1102226 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0318 13:45:55.058229 1102226 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0318 13:45:55.058238 1102226 command_runner.go:130] > #
	I0318 13:45:55.058258 1102226 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0318 13:45:55.058271 1102226 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0318 13:45:55.058279 1102226 command_runner.go:130] > #
	I0318 13:45:55.058289 1102226 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0318 13:45:55.058300 1102226 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0318 13:45:55.058306 1102226 command_runner.go:130] > #
	I0318 13:45:55.058315 1102226 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0318 13:45:55.058327 1102226 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0318 13:45:55.058336 1102226 command_runner.go:130] > # limitation.
	I0318 13:45:55.058346 1102226 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0318 13:45:55.058357 1102226 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0318 13:45:55.058366 1102226 command_runner.go:130] > runtime_type = "oci"
	I0318 13:45:55.058376 1102226 command_runner.go:130] > runtime_root = "/run/runc"
	I0318 13:45:55.058386 1102226 command_runner.go:130] > runtime_config_path = ""
	I0318 13:45:55.058394 1102226 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0318 13:45:55.058399 1102226 command_runner.go:130] > monitor_cgroup = "pod"
	I0318 13:45:55.058409 1102226 command_runner.go:130] > monitor_exec_cgroup = ""
	I0318 13:45:55.058418 1102226 command_runner.go:130] > monitor_env = [
	I0318 13:45:55.058428 1102226 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 13:45:55.058436 1102226 command_runner.go:130] > ]
	I0318 13:45:55.058447 1102226 command_runner.go:130] > privileged_without_host_devices = false
	I0318 13:45:55.058465 1102226 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0318 13:45:55.058476 1102226 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0318 13:45:55.058487 1102226 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0318 13:45:55.058498 1102226 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0318 13:45:55.058514 1102226 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0318 13:45:55.058528 1102226 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0318 13:45:55.058547 1102226 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0318 13:45:55.058563 1102226 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0318 13:45:55.058575 1102226 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0318 13:45:55.058588 1102226 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0318 13:45:55.058595 1102226 command_runner.go:130] > # Example:
	I0318 13:45:55.058600 1102226 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0318 13:45:55.058610 1102226 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0318 13:45:55.058622 1102226 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0318 13:45:55.058634 1102226 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0318 13:45:55.058649 1102226 command_runner.go:130] > # cpuset = 0
	I0318 13:45:55.058655 1102226 command_runner.go:130] > # cpushares = "0-1"
	I0318 13:45:55.058661 1102226 command_runner.go:130] > # Where:
	I0318 13:45:55.058668 1102226 command_runner.go:130] > # The workload name is workload-type.
	I0318 13:45:55.058679 1102226 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0318 13:45:55.058687 1102226 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0318 13:45:55.058692 1102226 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0318 13:45:55.058701 1102226 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0318 13:45:55.058710 1102226 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0318 13:45:55.058719 1102226 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0318 13:45:55.058729 1102226 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0318 13:45:55.058736 1102226 command_runner.go:130] > # Default value is set to true
	I0318 13:45:55.058743 1102226 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0318 13:45:55.058751 1102226 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0318 13:45:55.058759 1102226 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0318 13:45:55.058766 1102226 command_runner.go:130] > # Default value is set to 'false'
	I0318 13:45:55.058772 1102226 command_runner.go:130] > # disable_hostport_mapping = false
	I0318 13:45:55.058780 1102226 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0318 13:45:55.058783 1102226 command_runner.go:130] > #
	I0318 13:45:55.058791 1102226 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0318 13:45:55.058800 1102226 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0318 13:45:55.058810 1102226 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0318 13:45:55.058820 1102226 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0318 13:45:55.058830 1102226 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0318 13:45:55.058835 1102226 command_runner.go:130] > [crio.image]
	I0318 13:45:55.058844 1102226 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0318 13:45:55.058854 1102226 command_runner.go:130] > # default_transport = "docker://"
	I0318 13:45:55.058866 1102226 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0318 13:45:55.058875 1102226 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0318 13:45:55.058886 1102226 command_runner.go:130] > # global_auth_file = ""
	I0318 13:45:55.058898 1102226 command_runner.go:130] > # The image used to instantiate infra containers.
	I0318 13:45:55.058910 1102226 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:45:55.058926 1102226 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0318 13:45:55.058940 1102226 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0318 13:45:55.058953 1102226 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0318 13:45:55.058963 1102226 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:45:55.058976 1102226 command_runner.go:130] > # pause_image_auth_file = ""
	I0318 13:45:55.058989 1102226 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0318 13:45:55.059002 1102226 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0318 13:45:55.059015 1102226 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0318 13:45:55.059028 1102226 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0318 13:45:55.059038 1102226 command_runner.go:130] > # pause_command = "/pause"
	I0318 13:45:55.059054 1102226 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0318 13:45:55.059066 1102226 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0318 13:45:55.059074 1102226 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0318 13:45:55.059086 1102226 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0318 13:45:55.059099 1102226 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0318 13:45:55.059121 1102226 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0318 13:45:55.059131 1102226 command_runner.go:130] > # pinned_images = [
	I0318 13:45:55.059140 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.059152 1102226 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0318 13:45:55.059164 1102226 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0318 13:45:55.059174 1102226 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0318 13:45:55.059186 1102226 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0318 13:45:55.059198 1102226 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0318 13:45:55.059208 1102226 command_runner.go:130] > # signature_policy = ""
	I0318 13:45:55.059220 1102226 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0318 13:45:55.059233 1102226 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0318 13:45:55.059246 1102226 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0318 13:45:55.059258 1102226 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0318 13:45:55.059268 1102226 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0318 13:45:55.059277 1102226 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0318 13:45:55.059289 1102226 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0318 13:45:55.059303 1102226 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0318 13:45:55.059313 1102226 command_runner.go:130] > # changing them here.
	I0318 13:45:55.059323 1102226 command_runner.go:130] > # insecure_registries = [
	I0318 13:45:55.059330 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.059341 1102226 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0318 13:45:55.059352 1102226 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0318 13:45:55.059361 1102226 command_runner.go:130] > # image_volumes = "mkdir"
	I0318 13:45:55.059370 1102226 command_runner.go:130] > # Temporary directory to use for storing big files
	I0318 13:45:55.059375 1102226 command_runner.go:130] > # big_files_temporary_dir = ""
	I0318 13:45:55.059395 1102226 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0318 13:45:55.059405 1102226 command_runner.go:130] > # CNI plugins.
	I0318 13:45:55.059410 1102226 command_runner.go:130] > [crio.network]
	I0318 13:45:55.059423 1102226 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0318 13:45:55.059435 1102226 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0318 13:45:55.059445 1102226 command_runner.go:130] > # cni_default_network = ""
	I0318 13:45:55.059457 1102226 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0318 13:45:55.059468 1102226 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0318 13:45:55.059477 1102226 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0318 13:45:55.059484 1102226 command_runner.go:130] > # plugin_dirs = [
	I0318 13:45:55.059494 1102226 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0318 13:45:55.059503 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.059512 1102226 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0318 13:45:55.059521 1102226 command_runner.go:130] > [crio.metrics]
	I0318 13:45:55.059532 1102226 command_runner.go:130] > # Globally enable or disable metrics support.
	I0318 13:45:55.059541 1102226 command_runner.go:130] > enable_metrics = true
	I0318 13:45:55.059551 1102226 command_runner.go:130] > # Specify enabled metrics collectors.
	I0318 13:45:55.059562 1102226 command_runner.go:130] > # Per default all metrics are enabled.
	I0318 13:45:55.059576 1102226 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0318 13:45:55.059588 1102226 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0318 13:45:55.059601 1102226 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0318 13:45:55.059611 1102226 command_runner.go:130] > # metrics_collectors = [
	I0318 13:45:55.059621 1102226 command_runner.go:130] > # 	"operations",
	I0318 13:45:55.059631 1102226 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0318 13:45:55.059641 1102226 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0318 13:45:55.059651 1102226 command_runner.go:130] > # 	"operations_errors",
	I0318 13:45:55.059657 1102226 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0318 13:45:55.059664 1102226 command_runner.go:130] > # 	"image_pulls_by_name",
	I0318 13:45:55.059670 1102226 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0318 13:45:55.059679 1102226 command_runner.go:130] > # 	"image_pulls_failures",
	I0318 13:45:55.059690 1102226 command_runner.go:130] > # 	"image_pulls_successes",
	I0318 13:45:55.059697 1102226 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0318 13:45:55.059707 1102226 command_runner.go:130] > # 	"image_layer_reuse",
	I0318 13:45:55.059718 1102226 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0318 13:45:55.059728 1102226 command_runner.go:130] > # 	"containers_oom_total",
	I0318 13:45:55.059737 1102226 command_runner.go:130] > # 	"containers_oom",
	I0318 13:45:55.059752 1102226 command_runner.go:130] > # 	"processes_defunct",
	I0318 13:45:55.059760 1102226 command_runner.go:130] > # 	"operations_total",
	I0318 13:45:55.059765 1102226 command_runner.go:130] > # 	"operations_latency_seconds",
	I0318 13:45:55.059775 1102226 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0318 13:45:55.059785 1102226 command_runner.go:130] > # 	"operations_errors_total",
	I0318 13:45:55.059796 1102226 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0318 13:45:55.059806 1102226 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0318 13:45:55.059816 1102226 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0318 13:45:55.059838 1102226 command_runner.go:130] > # 	"image_pulls_success_total",
	I0318 13:45:55.059849 1102226 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0318 13:45:55.059857 1102226 command_runner.go:130] > # 	"containers_oom_count_total",
	I0318 13:45:55.059868 1102226 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0318 13:45:55.059879 1102226 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0318 13:45:55.059887 1102226 command_runner.go:130] > # ]
	I0318 13:45:55.059899 1102226 command_runner.go:130] > # The port on which the metrics server will listen.
	I0318 13:45:55.059909 1102226 command_runner.go:130] > # metrics_port = 9090
	I0318 13:45:55.059919 1102226 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0318 13:45:55.059926 1102226 command_runner.go:130] > # metrics_socket = ""
	I0318 13:45:55.059934 1102226 command_runner.go:130] > # The certificate for the secure metrics server.
	I0318 13:45:55.059947 1102226 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0318 13:45:55.059960 1102226 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0318 13:45:55.059970 1102226 command_runner.go:130] > # certificate on any modification event.
	I0318 13:45:55.059980 1102226 command_runner.go:130] > # metrics_cert = ""
	I0318 13:45:55.059992 1102226 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0318 13:45:55.060003 1102226 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0318 13:45:55.060013 1102226 command_runner.go:130] > # metrics_key = ""
	I0318 13:45:55.060024 1102226 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0318 13:45:55.060030 1102226 command_runner.go:130] > [crio.tracing]
	I0318 13:45:55.060038 1102226 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0318 13:45:55.060052 1102226 command_runner.go:130] > # enable_tracing = false
	I0318 13:45:55.060064 1102226 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0318 13:45:55.060074 1102226 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0318 13:45:55.060087 1102226 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0318 13:45:55.060098 1102226 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0318 13:45:55.060108 1102226 command_runner.go:130] > # CRI-O NRI configuration.
	I0318 13:45:55.060115 1102226 command_runner.go:130] > [crio.nri]
	I0318 13:45:55.060126 1102226 command_runner.go:130] > # Globally enable or disable NRI.
	I0318 13:45:55.060136 1102226 command_runner.go:130] > # enable_nri = false
	I0318 13:45:55.060146 1102226 command_runner.go:130] > # NRI socket to listen on.
	I0318 13:45:55.060154 1102226 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0318 13:45:55.060164 1102226 command_runner.go:130] > # NRI plugin directory to use.
	I0318 13:45:55.060174 1102226 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0318 13:45:55.060185 1102226 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0318 13:45:55.060195 1102226 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0318 13:45:55.060207 1102226 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0318 13:45:55.060215 1102226 command_runner.go:130] > # nri_disable_connections = false
	I0318 13:45:55.060224 1102226 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0318 13:45:55.060233 1102226 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0318 13:45:55.060245 1102226 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0318 13:45:55.060256 1102226 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0318 13:45:55.060269 1102226 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0318 13:45:55.060277 1102226 command_runner.go:130] > [crio.stats]
	I0318 13:45:55.060289 1102226 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0318 13:45:55.060301 1102226 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0318 13:45:55.060310 1102226 command_runner.go:130] > # stats_collection_period = 0
	I0318 13:45:55.060358 1102226 command_runner.go:130] ! time="2024-03-18 13:45:55.015802229Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0318 13:45:55.060385 1102226 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
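
The config dump above ends with metrics enabled ([crio.metrics], enable_metrics = true) while the port and collector list stay at their documented defaults. As a quick cross-check of that configuration, a Go sketch along these lines could scrape the endpoint; the 127.0.0.1:9090 address, on-node execution, and the crio_/container_runtime_ prefixes are assumptions taken from the commented defaults above, not something the log confirms:

// metricscheck.go: probe the CRI-O Prometheus endpoint enabled above.
// Sketch only; assumes the default metrics_port (9090) and that it runs
// on the node itself (e.g. via minikube ssh).
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	// Print only CRI-O's own sample lines; per the config comments above,
	// its collectors are exported with a crio_ or container_runtime_ prefix.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_") {
			fmt.Println(line)
		}
	}
}
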
	I0318 13:45:55.060621 1102226 cni.go:84] Creating CNI manager for ""
	I0318 13:45:55.060638 1102226 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:45:55.060650 1102226 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:45:55.060691 1102226 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-994669 NodeName:multinode-994669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:45:55.060881 1102226 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-994669"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
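
The rendered kubeadm config above is a single file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A minimal Go sketch, assuming the file has already been written to the /var/tmp/minikube/kubeadm.yaml.new path used a few lines below, could list the document kinds to confirm that structure:

// kubeadmkinds.go: list the kinds in a rendered multi-document kubeadm config.
// Illustration only; reading the file locally at this path is an assumption.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("read config:", err)
		return
	}

	// kubeadm accepts several YAML documents in one file, separated by "---".
	for i, doc := range strings.Split(string(data), "\n---") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
			}
		}
	}
}
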
	
	I0318 13:45:55.060967 1102226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:45:55.073277 1102226 command_runner.go:130] > kubeadm
	I0318 13:45:55.073293 1102226 command_runner.go:130] > kubectl
	I0318 13:45:55.073297 1102226 command_runner.go:130] > kubelet
	I0318 13:45:55.073534 1102226 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:45:55.073587 1102226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:45:55.085522 1102226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0318 13:45:55.104791 1102226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:45:55.124901 1102226 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0318 13:45:55.145130 1102226 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I0318 13:45:55.149564 1102226 command_runner.go:130] > 192.168.39.57	control-plane.minikube.internal
	I0318 13:45:55.149638 1102226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:45:55.310684 1102226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:45:55.327467 1102226 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669 for IP: 192.168.39.57
	I0318 13:45:55.327503 1102226 certs.go:194] generating shared ca certs ...
	I0318 13:45:55.327523 1102226 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:45:55.327753 1102226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 13:45:55.327837 1102226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 13:45:55.327854 1102226 certs.go:256] generating profile certs ...
	I0318 13:45:55.327968 1102226 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/client.key
	I0318 13:45:55.328059 1102226 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/apiserver.key.de4d6102
	I0318 13:45:55.328116 1102226 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/proxy-client.key
	I0318 13:45:55.328132 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:45:55.328150 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:45:55.328167 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:45:55.328188 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:45:55.328203 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:45:55.328221 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:45:55.328239 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:45:55.328261 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:45:55.328347 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 13:45:55.328391 1102226 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 13:45:55.328404 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 13:45:55.328434 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:45:55.328470 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:45:55.328502 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 13:45:55.328556 1102226 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 13:45:55.328598 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:55.328617 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem -> /usr/share/ca-certificates/1075208.pem
	I0318 13:45:55.328635 1102226 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> /usr/share/ca-certificates/10752082.pem
	I0318 13:45:55.329364 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:45:55.354116 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:45:55.379063 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:45:55.403409 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:45:55.427939 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:45:55.452186 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:45:55.478163 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:45:55.505532 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/multinode-994669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:45:55.532977 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:45:55.559046 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 13:45:55.584703 1102226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 13:45:55.610595 1102226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:45:55.629599 1102226 ssh_runner.go:195] Run: openssl version
	I0318 13:45:55.636543 1102226 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 13:45:55.636692 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:45:55.649808 1102226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:55.655139 1102226 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:55.655288 1102226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:55.655333 1102226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:45:55.661103 1102226 command_runner.go:130] > b5213941
	I0318 13:45:55.661328 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:45:55.671900 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 13:45:55.683965 1102226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 13:45:55.688768 1102226 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:45:55.688795 1102226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 13:45:55.688842 1102226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 13:45:55.694518 1102226 command_runner.go:130] > 51391683
	I0318 13:45:55.694674 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 13:45:55.704739 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 13:45:55.716231 1102226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 13:45:55.720627 1102226 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:45:55.720835 1102226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 13:45:55.720883 1102226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 13:45:55.726691 1102226 command_runner.go:130] > 3ec20f2e
	I0318 13:45:55.726736 1102226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
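
The three blocks above follow the usual OpenSSL trust-store pattern: place the CA under /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash, and symlink <hash>.0 into /etc/ssl/certs. A minimal Go sketch of that step, with an illustrative cert path and shelling out to the same openssl invocation rather than reimplementing the subject-hash algorithm, might look like:

// cahash.go: reproduce the hash-and-symlink step shown above for one CA file.
// Sketch only; the paths are illustrative and writing to /etc/ssl/certs needs root.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941

	// OpenSSL looks certificates up as <subject-hash>.0 in the certs directory,
	// which is why the log creates /etc/ssl/certs/<hash>.0 symlinks.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			fmt.Println("symlink:", err)
			return
		}
	}
	fmt.Println(link, "->", cert)
}
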
	I0318 13:45:55.736679 1102226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:45:55.741035 1102226 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:45:55.741055 1102226 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 13:45:55.741061 1102226 command_runner.go:130] > Device: 253,1	Inode: 8385597     Links: 1
	I0318 13:45:55.741067 1102226 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:45:55.741076 1102226 command_runner.go:130] > Access: 2024-03-18 13:39:34.195291904 +0000
	I0318 13:45:55.741081 1102226 command_runner.go:130] > Modify: 2024-03-18 13:39:34.195291904 +0000
	I0318 13:45:55.741092 1102226 command_runner.go:130] > Change: 2024-03-18 13:39:34.195291904 +0000
	I0318 13:45:55.741098 1102226 command_runner.go:130] >  Birth: 2024-03-18 13:39:34.195291904 +0000
	I0318 13:45:55.741234 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:45:55.746640 1102226 command_runner.go:130] > Certificate will not expire
	I0318 13:45:55.746818 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:45:55.752182 1102226 command_runner.go:130] > Certificate will not expire
	I0318 13:45:55.752360 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:45:55.758028 1102226 command_runner.go:130] > Certificate will not expire
	I0318 13:45:55.758100 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:45:55.763617 1102226 command_runner.go:130] > Certificate will not expire
	I0318 13:45:55.763687 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:45:55.769359 1102226 command_runner.go:130] > Certificate will not expire
	I0318 13:45:55.769420 1102226 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:45:55.774888 1102226 command_runner.go:130] > Certificate will not expire
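
Each of the checks above runs openssl x509 -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?". The same test can be expressed with crypto/x509; this is a sketch only, and the path is just one of the files the log checks:

// certexpiry.go: the same 24-hour expiry check as `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	// -checkend 86400 asks whether the cert is still valid 86400 seconds from now.
	if time.Now().Add(86400 * time.Second).Before(cert.NotAfter) {
		fmt.Println("Certificate will not expire")
	} else {
		fmt.Println("Certificate will expire")
	}
}
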
	I0318 13:45:55.774962 1102226 kubeadm.go:391] StartCluster: {Name:multinode-994669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-994669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.187 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:55.775126 1102226 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:45:55.775179 1102226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:45:55.810558 1102226 command_runner.go:130] > 6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408
	I0318 13:45:55.810591 1102226 command_runner.go:130] > eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb
	I0318 13:45:55.810601 1102226 command_runner.go:130] > 09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707
	I0318 13:45:55.810622 1102226 command_runner.go:130] > 7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e
	I0318 13:45:55.810642 1102226 command_runner.go:130] > 188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558
	I0318 13:45:55.810651 1102226 command_runner.go:130] > b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b
	I0318 13:45:55.810660 1102226 command_runner.go:130] > bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6
	I0318 13:45:55.810679 1102226 command_runner.go:130] > e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d
	I0318 13:45:55.812089 1102226 cri.go:89] found id: "6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408"
	I0318 13:45:55.812121 1102226 cri.go:89] found id: "eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb"
	I0318 13:45:55.812127 1102226 cri.go:89] found id: "09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707"
	I0318 13:45:55.812132 1102226 cri.go:89] found id: "7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e"
	I0318 13:45:55.812136 1102226 cri.go:89] found id: "188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558"
	I0318 13:45:55.812140 1102226 cri.go:89] found id: "b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b"
	I0318 13:45:55.812144 1102226 cri.go:89] found id: "bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6"
	I0318 13:45:55.812150 1102226 cri.go:89] found id: "e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d"
	I0318 13:45:55.812154 1102226 cri.go:89] found id: ""
	I0318 13:45:55.812217 1102226 ssh_runner.go:195] Run: sudo runc list -f json
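
The IDs above come from crictl ps -a --quiet filtered by the kube-system namespace label, which cri.go then reports as "found id" entries. A minimal Go sketch of that listing step, assuming crictl is on PATH and can reach the CRI-O socket (typically as root), could be:

// crictlids.go: collect kube-system container IDs the way the listing above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
	fmt.Printf("%d kube-system containers (running or exited)\n", len(ids))
}
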
	
	
	==> CRI-O <==
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.563350068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769788563318349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=215bbbfd-62e0-4dd4-a86f-ca6371eb5227 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.563968085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f474e56-ea9c-4169-a375-437123c78bd6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.564180755Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f474e56-ea9c-4169-a375-437123c78bd6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.564848038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab6fb9f215547503c2f5026fe22d54bb0c4973b8da58f09d45a1389e20d5beb8,PodSandboxId:4d786ff152ee96f98628927b79ef1fb4c65bb1b1c31ed4412e56de71beac9936,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710769596787353012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90e247cd8d814108c23bf2daec6e0ffcd336ff1f6f886604898aab8e57afb01,PodSandboxId:8b7f48024f09f5af5609cbe1da9acf581fb5cd8df6d4d0ce240253bf3fb64dde,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710769563368968629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e4c078a37823aab6ae8266e6f2a571e739a728c23b1ffb12c6c0666cd4f066,PodSandboxId:c1e7ddd456a9d70668f8620d2cbe98ca52880a69e76b17d0e186b3e29f5d3099,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769563274525133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec85d9c339487ddf79acfaa61c97552b23f26718c983d912f9d0e1293849064,PodSandboxId:f85e38707a49596f5508100ce59294a85bdae8ba1a809f6286a37b5e4104bac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769563145380693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bc8f0297b5c140dbf3c441ad40a2d84c21ab3adca8e70c50396c355cc9b76b,PodSandboxId:8fa25b4cfb48859f17f9f19df1cd3a822305a103bb2239ba34cea49e4296d1f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710769563087819999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf3cc2ad6df43671e836e0afec4c1fd92ce04a56a4fbb01030ab24f249ee6e5,PodSandboxId:06dfaaab006978e37fc5f4f741f88d330c31ab5b278d11b3a9608eea9db879d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769558449140703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c75f022d2d792ec20eb35075b8c653c85a83232050122d69fdae7aba3beb66,PodSandboxId:cae37a16cc40c5f40fd508920e72f1313262fb0b494185517fafb16ac9d245e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769558457072020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51
fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6e039078050199b0806102a9f3e9c27182a70c8eed11b4942614c08d327a2c,PodSandboxId:d8d812127351c9f156bc928093597705776f72968246de718a56eea6f37b9617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769558442390753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd5ec43f35fe26c1e7e4c1991efb67256216e9db18a59f10c4e29d919b0612c,PodSandboxId:cd5fc15879c5c9a6c34bd2472886615ac9d44235b7abb37ca81463e38677ccc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769558367655385,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:map[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e31864a374a74e22274f4257c17fe0e3bcd6bc701852676f9398db6b30e11ff,PodSandboxId:15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710769247878517313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408,PodSandboxId:fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769203179139272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb,PodSandboxId:5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769203152166634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.kubernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707,PodSandboxId:0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710769201447332193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e,PodSandboxId:1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769198148976610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558,PodSandboxId:44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769177967487073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b,PodSandboxId:8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769177918071729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6,PodSandboxId:503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769177881295276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,
},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d,PodSandboxId:2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769177842505994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:m
ap[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f474e56-ea9c-4169-a375-437123c78bd6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.611468810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da45f4ef-0e90-4979-81d4-b13da9826707 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.611549932Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da45f4ef-0e90-4979-81d4-b13da9826707 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.612702389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c456ebbd-1b4d-4fb3-8473-d765ed7aaee2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.613525633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769788613112616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c456ebbd-1b4d-4fb3-8473-d765ed7aaee2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.614171668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d83f5e77-2bd7-403b-9c31-3d0cccabd8de name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.614432030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d83f5e77-2bd7-403b-9c31-3d0cccabd8de name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.614771616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab6fb9f215547503c2f5026fe22d54bb0c4973b8da58f09d45a1389e20d5beb8,PodSandboxId:4d786ff152ee96f98628927b79ef1fb4c65bb1b1c31ed4412e56de71beac9936,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710769596787353012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90e247cd8d814108c23bf2daec6e0ffcd336ff1f6f886604898aab8e57afb01,PodSandboxId:8b7f48024f09f5af5609cbe1da9acf581fb5cd8df6d4d0ce240253bf3fb64dde,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710769563368968629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e4c078a37823aab6ae8266e6f2a571e739a728c23b1ffb12c6c0666cd4f066,PodSandboxId:c1e7ddd456a9d70668f8620d2cbe98ca52880a69e76b17d0e186b3e29f5d3099,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769563274525133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec85d9c339487ddf79acfaa61c97552b23f26718c983d912f9d0e1293849064,PodSandboxId:f85e38707a49596f5508100ce59294a85bdae8ba1a809f6286a37b5e4104bac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769563145380693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bc8f0297b5c140dbf3c441ad40a2d84c21ab3adca8e70c50396c355cc9b76b,PodSandboxId:8fa25b4cfb48859f17f9f19df1cd3a822305a103bb2239ba34cea49e4296d1f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710769563087819999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf3cc2ad6df43671e836e0afec4c1fd92ce04a56a4fbb01030ab24f249ee6e5,PodSandboxId:06dfaaab006978e37fc5f4f741f88d330c31ab5b278d11b3a9608eea9db879d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769558449140703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c75f022d2d792ec20eb35075b8c653c85a83232050122d69fdae7aba3beb66,PodSandboxId:cae37a16cc40c5f40fd508920e72f1313262fb0b494185517fafb16ac9d245e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769558457072020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51
fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6e039078050199b0806102a9f3e9c27182a70c8eed11b4942614c08d327a2c,PodSandboxId:d8d812127351c9f156bc928093597705776f72968246de718a56eea6f37b9617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769558442390753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd5ec43f35fe26c1e7e4c1991efb67256216e9db18a59f10c4e29d919b0612c,PodSandboxId:cd5fc15879c5c9a6c34bd2472886615ac9d44235b7abb37ca81463e38677ccc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769558367655385,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:map[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e31864a374a74e22274f4257c17fe0e3bcd6bc701852676f9398db6b30e11ff,PodSandboxId:15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710769247878517313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408,PodSandboxId:fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769203179139272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb,PodSandboxId:5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769203152166634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.kubernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707,PodSandboxId:0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710769201447332193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e,PodSandboxId:1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769198148976610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558,PodSandboxId:44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769177967487073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b,PodSandboxId:8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769177918071729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6,PodSandboxId:503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769177881295276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,
},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d,PodSandboxId:2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769177842505994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:m
ap[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d83f5e77-2bd7-403b-9c31-3d0cccabd8de name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.660706677Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4224760-1ac5-4920-935b-9d1cbed77114 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.660783982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4224760-1ac5-4920-935b-9d1cbed77114 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.661709468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ed165d1-502b-4b06-989a-651c455f7047 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.662569785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769788662543884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ed165d1-502b-4b06-989a-651c455f7047 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.663060165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=088f5990-e351-4a44-a5a9-4478f0548845 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.663114241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=088f5990-e351-4a44-a5a9-4478f0548845 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.663589094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab6fb9f215547503c2f5026fe22d54bb0c4973b8da58f09d45a1389e20d5beb8,PodSandboxId:4d786ff152ee96f98628927b79ef1fb4c65bb1b1c31ed4412e56de71beac9936,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710769596787353012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90e247cd8d814108c23bf2daec6e0ffcd336ff1f6f886604898aab8e57afb01,PodSandboxId:8b7f48024f09f5af5609cbe1da9acf581fb5cd8df6d4d0ce240253bf3fb64dde,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710769563368968629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e4c078a37823aab6ae8266e6f2a571e739a728c23b1ffb12c6c0666cd4f066,PodSandboxId:c1e7ddd456a9d70668f8620d2cbe98ca52880a69e76b17d0e186b3e29f5d3099,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769563274525133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec85d9c339487ddf79acfaa61c97552b23f26718c983d912f9d0e1293849064,PodSandboxId:f85e38707a49596f5508100ce59294a85bdae8ba1a809f6286a37b5e4104bac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769563145380693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bc8f0297b5c140dbf3c441ad40a2d84c21ab3adca8e70c50396c355cc9b76b,PodSandboxId:8fa25b4cfb48859f17f9f19df1cd3a822305a103bb2239ba34cea49e4296d1f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710769563087819999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf3cc2ad6df43671e836e0afec4c1fd92ce04a56a4fbb01030ab24f249ee6e5,PodSandboxId:06dfaaab006978e37fc5f4f741f88d330c31ab5b278d11b3a9608eea9db879d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769558449140703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c75f022d2d792ec20eb35075b8c653c85a83232050122d69fdae7aba3beb66,PodSandboxId:cae37a16cc40c5f40fd508920e72f1313262fb0b494185517fafb16ac9d245e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769558457072020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51
fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6e039078050199b0806102a9f3e9c27182a70c8eed11b4942614c08d327a2c,PodSandboxId:d8d812127351c9f156bc928093597705776f72968246de718a56eea6f37b9617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769558442390753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd5ec43f35fe26c1e7e4c1991efb67256216e9db18a59f10c4e29d919b0612c,PodSandboxId:cd5fc15879c5c9a6c34bd2472886615ac9d44235b7abb37ca81463e38677ccc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769558367655385,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:map[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e31864a374a74e22274f4257c17fe0e3bcd6bc701852676f9398db6b30e11ff,PodSandboxId:15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710769247878517313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408,PodSandboxId:fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769203179139272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb,PodSandboxId:5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769203152166634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.kubernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707,PodSandboxId:0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710769201447332193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e,PodSandboxId:1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769198148976610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558,PodSandboxId:44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769177967487073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b,PodSandboxId:8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769177918071729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6,PodSandboxId:503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769177881295276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,
},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d,PodSandboxId:2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769177842505994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:m
ap[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=088f5990-e351-4a44-a5a9-4478f0548845 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.714397385Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e5ad86b-723d-45e6-94a6-7f773bd44271 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.714477462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e5ad86b-723d-45e6-94a6-7f773bd44271 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.716668683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3baedded-ee8c-47bd-bc53-888ad031e80d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.717109454Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769788717082480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3baedded-ee8c-47bd-bc53-888ad031e80d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.718085656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61891e1a-79e2-4375-b648-3d0f2454293b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.718341940Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61891e1a-79e2-4375-b648-3d0f2454293b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:49:48 multinode-994669 crio[2839]: time="2024-03-18 13:49:48.718773412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab6fb9f215547503c2f5026fe22d54bb0c4973b8da58f09d45a1389e20d5beb8,PodSandboxId:4d786ff152ee96f98628927b79ef1fb4c65bb1b1c31ed4412e56de71beac9936,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710769596787353012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90e247cd8d814108c23bf2daec6e0ffcd336ff1f6f886604898aab8e57afb01,PodSandboxId:8b7f48024f09f5af5609cbe1da9acf581fb5cd8df6d4d0ce240253bf3fb64dde,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710769563368968629,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e4c078a37823aab6ae8266e6f2a571e739a728c23b1ffb12c6c0666cd4f066,PodSandboxId:c1e7ddd456a9d70668f8620d2cbe98ca52880a69e76b17d0e186b3e29f5d3099,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769563274525133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec85d9c339487ddf79acfaa61c97552b23f26718c983d912f9d0e1293849064,PodSandboxId:f85e38707a49596f5508100ce59294a85bdae8ba1a809f6286a37b5e4104bac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769563145380693,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bc8f0297b5c140dbf3c441ad40a2d84c21ab3adca8e70c50396c355cc9b76b,PodSandboxId:8fa25b4cfb48859f17f9f19df1cd3a822305a103bb2239ba34cea49e4296d1f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710769563087819999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf3cc2ad6df43671e836e0afec4c1fd92ce04a56a4fbb01030ab24f249ee6e5,PodSandboxId:06dfaaab006978e37fc5f4f741f88d330c31ab5b278d11b3a9608eea9db879d4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769558449140703,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c75f022d2d792ec20eb35075b8c653c85a83232050122d69fdae7aba3beb66,PodSandboxId:cae37a16cc40c5f40fd508920e72f1313262fb0b494185517fafb16ac9d245e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769558457072020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51
fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6e039078050199b0806102a9f3e9c27182a70c8eed11b4942614c08d327a2c,PodSandboxId:d8d812127351c9f156bc928093597705776f72968246de718a56eea6f37b9617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769558442390753,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd5ec43f35fe26c1e7e4c1991efb67256216e9db18a59f10c4e29d919b0612c,PodSandboxId:cd5fc15879c5c9a6c34bd2472886615ac9d44235b7abb37ca81463e38677ccc1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769558367655385,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:map[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e31864a374a74e22274f4257c17fe0e3bcd6bc701852676f9398db6b30e11ff,PodSandboxId:15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710769247878517313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4nbjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d344d9f-d488-4d70-8e7a-bfbd1f4724b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7d245ba5,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408,PodSandboxId:fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769203179139272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmwvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94537a54-a7ff-4e1f-bf71-43d66bc78138,},Annotations:map[string]string{io.kubernetes.container.hash: 5e787d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaeb32898d7b98d173c9a23aba80a5a54af78247b682ab417a23c813109b5ebb,PodSandboxId:5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769203152166634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 48b3e25f-c978-46aa-b8d5-d40371519a5e,},Annotations:map[string]string{io.kubernetes.container.hash: 25cf0613,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707,PodSandboxId:0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710769201447332193,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m8hth,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9b18f931-1481-4999-9ff1-89fc4a11f2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 18ac1f30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e,PodSandboxId:1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769198148976610,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f9tgg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: d46ff588-70f2-4b72-8951-c1d1518d7bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 9507cd60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558,PodSandboxId:44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769177967487073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e47bdd6c27ccb21f6946ff8943791b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b,PodSandboxId:8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769177918071729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: d8d8d68bba4c5da05ae4e5388cfe771f,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6,PodSandboxId:503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769177881295276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a947eabccfd6fe8f857f455d2bd38fd0,
},Annotations:map[string]string{io.kubernetes.container.hash: c9452c81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d,PodSandboxId:2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769177842505994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-994669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87d45fc5ff8300974beb759dc4755c67,},Annotations:m
ap[string]string{io.kubernetes.container.hash: a36a3292,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61891e1a-79e2-4375-b648-3d0f2454293b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ab6fb9f215547       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   4d786ff152ee9       busybox-5b5d89c9d6-4nbjw
	e90e247cd8d81       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   8b7f48024f09f       kindnet-m8hth
	88e4c078a3782       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   c1e7ddd456a9d       coredns-5dd5756b68-pmwvq
	eec85d9c33948       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   f85e38707a495       kube-proxy-f9tgg
	d9bc8f0297b5c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   8fa25b4cfb488       storage-provisioner
	75c75f022d2d7       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   cae37a16cc40c       kube-controller-manager-multinode-994669
	8cf3cc2ad6df4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   06dfaaab00697       etcd-multinode-994669
	be6e039078050       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   d8d812127351c       kube-scheduler-multinode-994669
	3cd5ec43f35fe       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   cd5fc15879c5c       kube-apiserver-multinode-994669
	1e31864a374a7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   15ba741fc0318       busybox-5b5d89c9d6-4nbjw
	6d25b416eebed       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      9 minutes ago       Exited              coredns                   0                   fbf2592d41d34       coredns-5dd5756b68-pmwvq
	eaeb32898d7b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   5f2cd94162ae3       storage-provisioner
	09589b564e838       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    9 minutes ago       Exited              kindnet-cni               0                   0139aab85c748       kindnet-m8hth
	7affd38bc5a22       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      9 minutes ago       Exited              kube-proxy                0                   1f23428e45300       kube-proxy-f9tgg
	188be02cea85b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      10 minutes ago      Exited              kube-scheduler            0                   44104ccc2e985       kube-scheduler-multinode-994669
	b957b85972f36       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      10 minutes ago      Exited              kube-controller-manager   0                   8147e55b30143       kube-controller-manager-multinode-994669
	bcc52f68fa634       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      10 minutes ago      Exited              etcd                      0                   503d0d94bd1d9       etcd-multinode-994669
	e04f6e0a268ab       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      10 minutes ago      Exited              kube-apiserver            0                   2623dcbffb530       kube-apiserver-multinode-994669
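	
	A listing like the one above comes straight from the CRI runtime on the node. As a rough sketch (assuming the default CRI-O socket and the multinode-994669 profile), it can be reproduced with crictl; the -a flag includes exited containers, which is why both the pre-restart (Exited) and post-restart (Running) instance of each component appears:
	
	  minikube -p multinode-994669 ssh "sudo crictl ps -a"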
	
	
	==> coredns [6d25b416eebed3141b6412a2b303b65f7dcd09570dd9ddefb5cf31260d415408] <==
	[INFO] 10.244.1.2:57330 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001965197s
	[INFO] 10.244.1.2:39970 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103391s
	[INFO] 10.244.1.2:57592 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087847s
	[INFO] 10.244.1.2:33063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002370487s
	[INFO] 10.244.1.2:52477 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077909s
	[INFO] 10.244.1.2:52575 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097759s
	[INFO] 10.244.1.2:35955 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124998s
	[INFO] 10.244.0.3:58000 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168623s
	[INFO] 10.244.0.3:59283 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102799s
	[INFO] 10.244.0.3:46987 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050733s
	[INFO] 10.244.0.3:48885 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059192s
	[INFO] 10.244.1.2:54373 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001524s
	[INFO] 10.244.1.2:55816 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109549s
	[INFO] 10.244.1.2:59478 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135038s
	[INFO] 10.244.1.2:39606 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009545s
	[INFO] 10.244.0.3:46814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182754s
	[INFO] 10.244.0.3:33907 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000196752s
	[INFO] 10.244.0.3:59330 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102667s
	[INFO] 10.244.0.3:45408 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000149185s
	[INFO] 10.244.1.2:60952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153722s
	[INFO] 10.244.1.2:47659 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000197442s
	[INFO] 10.244.1.2:40958 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086161s
	[INFO] 10.244.1.2:46663 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000242883s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [88e4c078a37823aab6ae8266e6f2a571e739a728c23b1ffb12c6c0666cd4f066] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46426 - 6188 "HINFO IN 2855692911652182914.5550051341023083747. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02070556s
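	
	The two CoreDNS logs above can also be pulled with kubectl instead of from the node (a sketch, assuming the kubeconfig context minikube created for this profile): the exited instance via --previous, the running one without it:
	
	  kubectl --context multinode-994669 -n kube-system logs coredns-5dd5756b68-pmwvq --previous
	  kubectl --context multinode-994669 -n kube-system logs coredns-5dd5756b68-pmwvq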
	
	
	==> describe nodes <==
	Name:               multinode-994669
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-994669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=multinode-994669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_39_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:39:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-994669
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:49:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:46:02 +0000   Mon, 18 Mar 2024 13:39:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:46:02 +0000   Mon, 18 Mar 2024 13:39:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:46:02 +0000   Mon, 18 Mar 2024 13:39:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:46:02 +0000   Mon, 18 Mar 2024 13:40:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    multinode-994669
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fb855a985a54fadb8eaa3b3c7fe3c0e
	  System UUID:                1fb855a9-85a5-4fad-b8ea-a3b3c7fe3c0e
	  Boot ID:                    32998e24-00c7-44a4-a7bd-183e2c2fc329
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4nbjw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	  kube-system                 coredns-5dd5756b68-pmwvq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m53s
	  kube-system                 etcd-multinode-994669                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-m8hth                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m53s
	  kube-system                 kube-apiserver-multinode-994669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-994669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-f9tgg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-scheduler-multinode-994669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m50s                  kube-proxy       
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-994669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-994669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-994669 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m53s                  node-controller  Node multinode-994669 event: Registered Node multinode-994669 in Controller
	  Normal  NodeReady                9m47s                  kubelet          Node multinode-994669 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node multinode-994669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node multinode-994669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node multinode-994669 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m35s                  node-controller  Node multinode-994669 event: Registered Node multinode-994669 in Controller
	
	
	Name:               multinode-994669-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-994669-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=multinode-994669
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_46_45_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:46:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-994669-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:47:25 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 13:47:15 +0000   Mon, 18 Mar 2024 13:48:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 13:47:15 +0000   Mon, 18 Mar 2024 13:48:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 13:47:15 +0000   Mon, 18 Mar 2024 13:48:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 13:47:15 +0000   Mon, 18 Mar 2024 13:48:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    multinode-994669-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 24131595995f41d78663efe9f2f8d32a
	  System UUID:                24131595-995f-41d7-8663-efe9f2f8d32a
	  Boot ID:                    456a4975-371e-4640-a34e-bca32d17d85a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-ngqq9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kindnet-zhkmw               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m14s
	  kube-system                 kube-proxy-pxm42            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m15s (x5 over 9m16s)  kubelet          Node multinode-994669-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s (x5 over 9m16s)  kubelet          Node multinode-994669-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s (x5 over 9m16s)  kubelet          Node multinode-994669-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m6s                   kubelet          Node multinode-994669-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m4s (x5 over 3m6s)    kubelet          Node multinode-994669-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x5 over 3m6s)    kubelet          Node multinode-994669-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x5 over 3m6s)    kubelet          Node multinode-994669-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m                     node-controller  Node multinode-994669-m02 event: Registered Node multinode-994669-m02 in Controller
	  Normal  NodeReady                2m57s                  kubelet          Node multinode-994669-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-994669-m02 status is now: NodeNotReady
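	
	The node descriptions above are the standard kubectl view and can be regenerated with the command below (a sketch, assuming the profile's kubeconfig context). The unreachable taints and Unknown conditions on multinode-994669-m02 match the NodeNotReady event at the end of its event list: the kubelet on that node stopped posting status.
	
	  kubectl --context multinode-994669 describe nodes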
	
	
	==> dmesg <==
	[  +0.171824] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.155479] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.230989] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.756949] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.057232] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.607094] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.406972] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.362286] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.086455] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.316199] systemd-fstab-generator[1459]: Ignoring "noauto" option for root device
	[  +0.088333] kauditd_printk_skb: 21 callbacks suppressed
	[Mar18 13:40] kauditd_printk_skb: 56 callbacks suppressed
	[ +44.142660] kauditd_printk_skb: 18 callbacks suppressed
	[Mar18 13:45] systemd-fstab-generator[2762]: Ignoring "noauto" option for root device
	[  +0.157015] systemd-fstab-generator[2775]: Ignoring "noauto" option for root device
	[  +0.169579] systemd-fstab-generator[2789]: Ignoring "noauto" option for root device
	[  +0.144400] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.258795] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +3.125567] systemd-fstab-generator[2924]: Ignoring "noauto" option for root device
	[  +2.048398] systemd-fstab-generator[3049]: Ignoring "noauto" option for root device
	[  +0.080694] kauditd_printk_skb: 122 callbacks suppressed
	[Mar18 13:46] kauditd_printk_skb: 52 callbacks suppressed
	[ +12.115498] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.786216] systemd-fstab-generator[3868]: Ignoring "noauto" option for root device
	[ +18.854840] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [8cf3cc2ad6df43671e836e0afec4c1fd92ce04a56a4fbb01030ab24f249ee6e5] <==
	{"level":"info","ts":"2024-03-18T13:45:58.854995Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:45:58.855022Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:45:58.855421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d switched to configuration voters=(8786012295892039485)"}
	{"level":"info","ts":"2024-03-18T13:45:58.85552Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","added-peer-id":"79ee2fa200dbf73d","added-peer-peer-urls":["https://192.168.39.57:2380"]}
	{"level":"info","ts":"2024-03-18T13:45:58.855655Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:45:58.8557Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:45:58.868895Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T13:45:58.871321Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"79ee2fa200dbf73d","initial-advertise-peer-urls":["https://192.168.39.57:2380"],"listen-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.57:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T13:45:58.87141Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T13:45:58.871606Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-03-18T13:45:58.871637Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-03-18T13:46:00.626265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T13:46:00.626385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:46:00.626462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgPreVoteResp from 79ee2fa200dbf73d at term 2"}
	{"level":"info","ts":"2024-03-18T13:46:00.626497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T13:46:00.626522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgVoteResp from 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-03-18T13:46:00.626548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became leader at term 3"}
	{"level":"info","ts":"2024-03-18T13:46:00.626574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-03-18T13:46:00.633104Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:46:00.633049Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"79ee2fa200dbf73d","local-member-attributes":"{Name:multinode-994669 ClientURLs:[https://192.168.39.57:2379]}","request-path":"/0/members/79ee2fa200dbf73d/attributes","cluster-id":"cdb6bc6ece496785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:46:00.634641Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:46:00.634876Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.57:2379"}
	{"level":"info","ts":"2024-03-18T13:46:00.635866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:46:00.635978Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T13:46:00.639134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [bcc52f68fa634ce7384b993e474d31d4e780ffd5e4f94ecdf67e338e0c6d01e6] <==
	{"level":"info","ts":"2024-03-18T13:39:38.598398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgVoteResp from 79ee2fa200dbf73d at term 2"}
	{"level":"info","ts":"2024-03-18T13:39:38.598406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became leader at term 2"}
	{"level":"info","ts":"2024-03-18T13:39:38.598413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 2"}
	{"level":"info","ts":"2024-03-18T13:39:38.604401Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:39:38.606513Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"79ee2fa200dbf73d","local-member-attributes":"{Name:multinode-994669 ClientURLs:[https://192.168.39.57:2379]}","request-path":"/0/members/79ee2fa200dbf73d/attributes","cluster-id":"cdb6bc6ece496785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:39:38.606785Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:39:38.627603Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:39:38.631293Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:39:38.631332Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:39:38.6384Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:39:38.638458Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:39:38.651353Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:39:38.651474Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T13:39:38.654962Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.57:2379"}
	{"level":"info","ts":"2024-03-18T13:40:37.633285Z","caller":"traceutil/trace.go:171","msg":"trace[438097044] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"108.592372ms","start":"2024-03-18T13:40:37.524587Z","end":"2024-03-18T13:40:37.633179Z","steps":["trace[438097044] 'process raft request'  (duration: 108.095316ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:44:20.110281Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-18T13:44:20.110507Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-994669","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	{"level":"warn","ts":"2024-03-18T13:44:20.11071Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:44:20.110844Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:44:20.146512Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:44:20.146569Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T13:44:20.148058Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"79ee2fa200dbf73d","current-leader-member-id":"79ee2fa200dbf73d"}
	{"level":"info","ts":"2024-03-18T13:44:20.151423Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-03-18T13:44:20.151582Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-03-18T13:44:20.15162Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-994669","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	
	
	==> kernel <==
	 13:49:49 up 10 min,  0 users,  load average: 0.22, 0.22, 0.12
	Linux multinode-994669 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [09589b564e838693cb1f9283109e7e5bb71def7c3da7b1d128478b38c4dc2707] <==
	I0318 13:43:32.538585       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:43:42.551002       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:43:42.551115       1 main.go:227] handling current node
	I0318 13:43:42.551149       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:43:42.551167       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:43:42.551367       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:43:42.551402       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:43:52.556263       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:43:52.556498       1 main.go:227] handling current node
	I0318 13:43:52.556539       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:43:52.556560       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:43:52.556725       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:43:52.556746       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:44:02.569989       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:44:02.570165       1 main.go:227] handling current node
	I0318 13:44:02.570198       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:44:02.570328       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:44:02.570529       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:44:02.570582       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	I0318 13:44:12.582561       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:44:12.582894       1 main.go:227] handling current node
	I0318 13:44:12.582960       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:44:12.582992       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:44:12.583274       1 main.go:223] Handling node with IPs: map[192.168.39.187:{}]
	I0318 13:44:12.583325       1 main.go:250] Node multinode-994669-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e90e247cd8d814108c23bf2daec6e0ffcd336ff1f6f886604898aab8e57afb01] <==
	I0318 13:48:44.434654       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:48:54.446817       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:48:54.446964       1 main.go:227] handling current node
	I0318 13:48:54.446997       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:48:54.447016       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:49:04.452362       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:49:04.452413       1 main.go:227] handling current node
	I0318 13:49:04.452428       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:49:04.452435       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:49:14.461271       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:49:14.461378       1 main.go:227] handling current node
	I0318 13:49:14.461401       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:49:14.461418       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:49:24.469057       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:49:24.469588       1 main.go:227] handling current node
	I0318 13:49:24.469616       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:49:24.469637       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:49:34.475420       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:49:34.475513       1 main.go:227] handling current node
	I0318 13:49:34.475536       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:49:34.475553       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	I0318 13:49:44.480779       1 main.go:223] Handling node with IPs: map[192.168.39.57:{}]
	I0318 13:49:44.480865       1 main.go:227] handling current node
	I0318 13:49:44.480894       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0318 13:49:44.480913       1 main.go:250] Node multinode-994669-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3cd5ec43f35fe26c1e7e4c1991efb67256216e9db18a59f10c4e29d919b0612c] <==
	I0318 13:46:02.003881       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:46:02.009400       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:46:02.009495       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:46:02.140755       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:46:02.193495       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:46:02.194525       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:46:02.194594       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:46:02.200391       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:46:02.200910       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:46:02.201422       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:46:02.204352       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:46:02.204484       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:46:02.204521       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:46:02.204546       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:46:02.204568       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:46:02.213580       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0318 13:46:02.217333       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0318 13:46:03.001242       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:46:04.896714       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:46:05.032953       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:46:05.044957       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:46:05.129924       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:46:05.141645       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:46:14.961781       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:46:15.057614       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e04f6e0a268ab876c002207c635bc5e171ea7082cfb1f19417e3dc8aefc5100d] <==
	I0318 13:44:20.139542       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0318 13:44:20.139438       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0318 13:44:20.139578       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0318 13:44:20.139608       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:44:20.140366       1 controller.go:162] Shutting down OpenAPI controller
	I0318 13:44:20.140437       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0318 13:44:20.139569       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0318 13:44:20.140377       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0318 13:44:20.139633       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0318 13:44:20.139640       1 establishing_controller.go:87] Shutting down EstablishingController
	I0318 13:44:20.140278       1 naming_controller.go:302] Shutting down NamingConditionController
	W0318 13:44:20.142108       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.142184       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.142941       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.143018       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.143054       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.143088       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.143124       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.143621       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0318 13:44:20.144360       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	W0318 13:44:20.144701       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.144881       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.144961       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.145002       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:44:20.145067       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [75c75f022d2d792ec20eb35075b8c653c85a83232050122d69fdae7aba3beb66] <==
	I0318 13:46:45.629846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.321µs"
	I0318 13:46:49.915913       1 event.go:307] "Event occurred" object="multinode-994669-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-994669-m02 event: Registered Node multinode-994669-m02 in Controller"
	I0318 13:46:52.223055       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:46:52.243957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="45.415µs"
	I0318 13:46:52.261068       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="52.514µs"
	I0318 13:46:54.777115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.031162ms"
	I0318 13:46:54.777535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="150.714µs"
	I0318 13:46:54.933017       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ngqq9" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-ngqq9"
	I0318 13:47:11.619052       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:47:14.347967       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:47:14.348533       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-994669-m03\" does not exist"
	I0318 13:47:14.361958       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-994669-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:47:21.374308       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m03"
	I0318 13:47:27.226553       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:47:29.982334       1 event.go:307] "Event occurred" object="multinode-994669-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-994669-m03 event: Removing Node multinode-994669-m03 from Controller"
	I0318 13:48:10.002818       1 event.go:307] "Event occurred" object="multinode-994669-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-994669-m02 status is now: NodeNotReady"
	I0318 13:48:10.010822       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ngqq9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:48:10.025402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.249046ms"
	I0318 13:48:10.025514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.548µs"
	I0318 13:48:10.029784       1 event.go:307] "Event occurred" object="kube-system/kindnet-zhkmw" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:48:10.052385       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-pxm42" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:48:14.869503       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-6k8dh"
	I0318 13:48:14.899137       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-6k8dh"
	I0318 13:48:14.899184       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-ff8vd"
	I0318 13:48:14.933629       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-ff8vd"
	
	
	==> kube-controller-manager [b957b85972f365fd37704ad6380794636e1bbbe3c19b5184ba72b5f3ac4f9f9b] <==
	I0318 13:40:48.584638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="12.630168ms"
	I0318 13:40:48.585576       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="30.767µs"
	I0318 13:41:23.142994       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:41:23.149496       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-994669-m03\" does not exist"
	I0318 13:41:23.169963       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6k8dh"
	I0318 13:41:23.176397       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-994669-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:41:23.176856       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ff8vd"
	I0318 13:41:26.692064       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-994669-m03"
	I0318 13:41:26.692393       1 event.go:307] "Event occurred" object="multinode-994669-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-994669-m03 event: Registered Node multinode-994669-m03 in Controller"
	I0318 13:41:32.974004       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:42:03.686720       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:42:06.114758       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-994669-m03\" does not exist"
	I0318 13:42:06.115936       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:42:06.128295       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-994669-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:42:13.401669       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:42:56.760359       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-994669-m02"
	I0318 13:42:56.761096       1 event.go:307] "Event occurred" object="multinode-994669-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-994669-m03 status is now: NodeNotReady"
	I0318 13:42:56.778848       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ff8vd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:42:56.797895       1 event.go:307] "Event occurred" object="kube-system/kindnet-6k8dh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:43:01.808750       1 event.go:307] "Event occurred" object="multinode-994669-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-994669-m02 status is now: NodeNotReady"
	I0318 13:43:01.821034       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-pxm42" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:43:01.838997       1 event.go:307] "Event occurred" object="kube-system/kindnet-zhkmw" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:43:01.861480       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8cd7k" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:43:01.876783       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="20.870424ms"
	I0318 13:43:01.877674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="135.292µs"
	
	
	==> kube-proxy [7affd38bc5a222dd61d9fabbfaed814247453910580baef0759f754587d9ca2e] <==
	I0318 13:39:58.364112       1 server_others.go:69] "Using iptables proxy"
	I0318 13:39:58.386912       1 node.go:141] Successfully retrieved node IP: 192.168.39.57
	I0318 13:39:58.432530       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:39:58.432570       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:39:58.435266       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:39:58.435371       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:39:58.436056       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:39:58.436097       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:39:58.438322       1 config.go:188] "Starting service config controller"
	I0318 13:39:58.438743       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:39:58.438837       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:39:58.438861       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:39:58.440137       1 config.go:315] "Starting node config controller"
	I0318 13:39:58.440173       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:39:58.539777       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:39:58.539833       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:39:58.540386       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [eec85d9c339487ddf79acfaa61c97552b23f26718c983d912f9d0e1293849064] <==
	I0318 13:46:03.450018       1 server_others.go:69] "Using iptables proxy"
	I0318 13:46:03.505830       1 node.go:141] Successfully retrieved node IP: 192.168.39.57
	I0318 13:46:03.575372       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:46:03.575398       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:46:03.581951       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:46:03.582069       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:46:03.582326       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:46:03.582338       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:46:03.584472       1 config.go:188] "Starting service config controller"
	I0318 13:46:03.584598       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:46:03.584683       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:46:03.584732       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:46:03.585341       1 config.go:315] "Starting node config controller"
	I0318 13:46:03.585410       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:46:03.686858       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:46:03.687027       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:46:03.687052       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [188be02cea85b0bbdd984396f95a0e661f6106b7765982c547f30d3198582558] <==
	E0318 13:39:40.713641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:39:40.713691       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:39:40.713802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:39:40.713906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:39:40.713969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:39:41.552593       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:39:41.552695       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:39:41.625046       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:39:41.625088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:39:41.714489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:39:41.714573       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:39:41.747876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 13:39:41.748016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:39:41.803738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:39:41.803869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:39:41.972829       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:39:41.973054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:39:42.003147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 13:39:42.003199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 13:39:42.026066       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:39:42.026118       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:39:44.397642       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:44:20.100769       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 13:44:20.100956       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0318 13:44:20.102076       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [be6e039078050199b0806102a9f3e9c27182a70c8eed11b4942614c08d327a2c] <==
	I0318 13:45:59.375052       1 serving.go:348] Generated self-signed cert in-memory
	W0318 13:46:02.093794       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 13:46:02.093901       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:46:02.093913       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 13:46:02.093921       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:46:02.141521       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:46:02.141558       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:46:02.143500       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:46:02.143764       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:46:02.143812       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:46:02.143862       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:46:02.245174       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 13:47:57 multinode-994669 kubelet[3056]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:47:57 multinode-994669 kubelet[3056]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:47:57 multinode-994669 kubelet[3056]: E0318 13:47:57.619575    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/podd6e47bdd6c27ccb21f6946ff8943791b/crio-44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86: Error finding container 44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86: Status 404 returned error can't find the container with id 44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86
	Mar 18 13:47:57 multinode-994669 kubelet[3056]: E0318 13:47:57.619984    3056 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod48b3e25f-c978-46aa-b8d5-d40371519a5e/crio-5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804: Error finding container 5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804: Status 404 returned error can't find the container with id 5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804
	Mar 18 13:47:57 multinode-994669 kubelet[3056]: E0318 13:47:57.620401    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod87d45fc5ff8300974beb759dc4755c67/crio-2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d: Error finding container 2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d: Status 404 returned error can't find the container with id 2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d
	Mar 18 13:47:57 multinode-994669 kubelet[3056]: E0318 13:47:57.620692    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod94537a54-a7ff-4e1f-bf71-43d66bc78138/crio-fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e: Error finding container fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e: Status 404 returned error can't find the container with id fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e
	Mar 18 13:47:57 multinode-994669 kubelet[3056]: E0318 13:47:57.620860    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/podd8d8d68bba4c5da05ae4e5388cfe771f/crio-8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d: Error finding container 8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d: Status 404 returned error can't find the container with id 8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d
	Mar 18 13:47:57 multinode-994669 kubelet[3056]: E0318 13:47:57.621073    3056 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podd46ff588-70f2-4b72-8951-c1d1518d7bd0/crio-1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb: Error finding container 1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb: Status 404 returned error can't find the container with id 1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb
	Mar 18 13:47:57 multinode-994669 kubelet[3056]: E0318 13:47:57.621379    3056 manager.go:1106] Failed to create existing container: /kubepods/pod9b18f931-1481-4999-9ff1-89fc4a11f2ec/crio-0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc: Error finding container 0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc: Status 404 returned error can't find the container with id 0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc
	Mar 18 13:47:57 multinode-994669 kubelet[3056]: E0318 13:47:57.621637    3056 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod2d344d9f-d488-4d70-8e7a-bfbd1f4724b0/crio-15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f: Error finding container 15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f: Status 404 returned error can't find the container with id 15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f
	Mar 18 13:47:57 multinode-994669 kubelet[3056]: E0318 13:47:57.621864    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda947eabccfd6fe8f857f455d2bd38fd0/crio-503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e: Error finding container 503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e: Status 404 returned error can't find the container with id 503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e
	Mar 18 13:48:57 multinode-994669 kubelet[3056]: E0318 13:48:57.537789    3056 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:48:57 multinode-994669 kubelet[3056]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:48:57 multinode-994669 kubelet[3056]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:48:57 multinode-994669 kubelet[3056]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:48:57 multinode-994669 kubelet[3056]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:48:57 multinode-994669 kubelet[3056]: E0318 13:48:57.619908    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/podd8d8d68bba4c5da05ae4e5388cfe771f/crio-8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d: Error finding container 8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d: Status 404 returned error can't find the container with id 8147e55b30143d0fb503b9d0e5254e8c82373dab80f5c8750150a0aa1132414d
	Mar 18 13:48:57 multinode-994669 kubelet[3056]: E0318 13:48:57.620367    3056 manager.go:1106] Failed to create existing container: /kubepods/pod9b18f931-1481-4999-9ff1-89fc4a11f2ec/crio-0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc: Error finding container 0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc: Status 404 returned error can't find the container with id 0139aab85c748eb48b1113bd902b2bb8e42b17c1158b84aed966930bed1984bc
	Mar 18 13:48:57 multinode-994669 kubelet[3056]: E0318 13:48:57.620662    3056 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod48b3e25f-c978-46aa-b8d5-d40371519a5e/crio-5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804: Error finding container 5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804: Status 404 returned error can't find the container with id 5f2cd94162ae3c2e12a8012ed62d21828059713f9e8566e81af6c407f1468804
	Mar 18 13:48:57 multinode-994669 kubelet[3056]: E0318 13:48:57.620970    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/podd6e47bdd6c27ccb21f6946ff8943791b/crio-44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86: Error finding container 44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86: Status 404 returned error can't find the container with id 44104ccc2e9854976f3940f272961bb0915ac4824d94aa1d993f657f94de2a86
	Mar 18 13:48:57 multinode-994669 kubelet[3056]: E0318 13:48:57.621263    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda947eabccfd6fe8f857f455d2bd38fd0/crio-503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e: Error finding container 503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e: Status 404 returned error can't find the container with id 503d0d94bd1d99264dc27dd66a4e32fc8e552b1c4b9fd6d6b06755e73993593e
	Mar 18 13:48:57 multinode-994669 kubelet[3056]: E0318 13:48:57.621566    3056 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podd46ff588-70f2-4b72-8951-c1d1518d7bd0/crio-1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb: Error finding container 1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb: Status 404 returned error can't find the container with id 1f23428e45300b62a10bf1ef0df6adf45749182e59b505ce08963746eddbaedb
	Mar 18 13:48:57 multinode-994669 kubelet[3056]: E0318 13:48:57.621749    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod94537a54-a7ff-4e1f-bf71-43d66bc78138/crio-fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e: Error finding container fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e: Status 404 returned error can't find the container with id fbf2592d41d34b79debf8b28794dbe86928e9b903609b93b20fb1b32e33da86e
	Mar 18 13:48:57 multinode-994669 kubelet[3056]: E0318 13:48:57.621970    3056 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod2d344d9f-d488-4d70-8e7a-bfbd1f4724b0/crio-15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f: Error finding container 15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f: Status 404 returned error can't find the container with id 15ba741fc0318aec32c63abfbff019c58a0c7d69b2af3cb63c272c092ac09b4f
	Mar 18 13:48:57 multinode-994669 kubelet[3056]: E0318 13:48:57.622190    3056 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod87d45fc5ff8300974beb759dc4755c67/crio-2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d: Error finding container 2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d: Status 404 returned error can't find the container with id 2623dcbffb530af266be24ce877de3bf4086f1aaeeb7de9940b907c1552af43d
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:49:48.269103 1103714 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18427-1067917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
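A note on the stderr above: "bufio.Scanner: token too long" is the standard Go failure when bufio.Scanner meets a line longer than its token limit (64 KiB by default), which is why the harness could not echo lastStart.txt here; the log itself was written, only reading it back failed. A minimal sketch of reading such a file with an enlarged buffer follows; the file name is only a placeholder for illustration:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
    )

    func main() {
        f, err := os.Open("lastStart.txt") // placeholder path for illustration
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // The default cap is bufio.MaxScanTokenSize (64 KiB); a single very long
        // log line overflows it. Raise the cap, here to 1 MiB.
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
        for sc.Scan() {
            fmt.Println(sc.Text())
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err) // without Buffer this is "bufio.Scanner: token too long"
        }
    }

Without the Buffer call, Scan stops at the oversized line and scanner.Err() returns exactly the error logged above.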
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-994669 -n multinode-994669
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-994669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.71s)
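The kubelet entries in the stdout capture also keep failing their iptables canary: ip6tables cannot open the nat table, which typically means the ip6table_nat module is not loaded in the guest kernel, so chain creation exits with status 3. The probe below is a rough stand-in for that kind of canary check, not kubelet's actual code; it tries to create a throwaway chain in the IPv6 nat table and reports failure as missing IPv6 NAT support:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // -w waits for the xtables lock; the chain name is arbitrary for a probe.
        out, err := exec.Command("ip6tables", "-w", "-t", "nat", "-N", "PROBE-CANARY").CombinedOutput()
        if err != nil {
            // On a guest without ip6table_nat this fails with
            // "Table does not exist (do you need to insmod?)".
            fmt.Printf("ipv6 nat unavailable: %v\n%s", err, out)
            return
        }
        // Remove the probe chain when creation succeeds.
        _ = exec.Command("ip6tables", "-w", "-t", "nat", "-X", "PROBE-CANARY").Run()
        fmt.Println("ipv6 nat available")
    }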

                                                
                                    
x
+
TestPreload (278.72s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-210876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0318 13:54:17.919232 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-210876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m16.954920348s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-210876 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-210876 image pull gcr.io/k8s-minikube/busybox: (1.756886553s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-210876
E0318 13:57:37.321521 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-210876: exit status 82 (2m0.486649996s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-210876"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-210876 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-03-18 13:57:47.746077021 +0000 UTC m=+4385.539536213
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-210876 -n test-preload-210876
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-210876 -n test-preload-210876: exit status 3 (18.567556496s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:58:06.308234 1105997 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host
	E0318 13:58:06.308267 1105997 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-210876" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-210876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-210876
--- FAIL: TestPreload (278.72s)
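The decisive error here is the GUEST_STOP_TIMEOUT above: "minikube stop" kept polling the kvm2 guest, the guest never left the "Running" state within the allowed window, the command gave up with exit status 82, and afterwards the host was unreachable over SSH ("no route to host"). The sketch below illustrates the general stop-with-deadline pattern only; requestGracefulShutdown, guestState and forcePowerOff are made-up stand-ins, not minikube or kvm2 driver functions:

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrative stand-ins; not real minikube or libvirt calls.
    func requestGracefulShutdown() error { fmt.Println("sending ACPI shutdown"); return nil }
    func guestState() string             { return "Running" } // simulate a guest that never stops
    func forcePowerOff() error           { fmt.Println("forcing power-off"); return nil }

    // stopWithDeadline asks the guest to shut down, polls its state, and
    // escalates to a hard power-off if it is still "Running" at the deadline.
    func stopWithDeadline(deadline time.Duration) error {
        if err := requestGracefulShutdown(); err != nil {
            return err
        }
        for end := time.Now().Add(deadline); time.Now().Before(end); {
            if guestState() != "Running" {
                return nil // clean shutdown
            }
            time.Sleep(500 * time.Millisecond)
        }
        return forcePowerOff()
    }

    func main() {
        if err := stopWithDeadline(2 * time.Second); err != nil {
            fmt.Println("stop failed:", err)
        }
    }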

                                                
                                    
x
+
TestKubernetesUpgrade (353.34s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-140251 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-140251 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m36.155011052s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-140251] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-140251" primary control-plane node in "kubernetes-upgrade-140251" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 14:03:05.655952 1111401 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:03:05.656239 1111401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:03:05.656248 1111401 out.go:304] Setting ErrFile to fd 2...
	I0318 14:03:05.656252 1111401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:03:05.656453 1111401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:03:05.657116 1111401 out.go:298] Setting JSON to false
	I0318 14:03:05.658182 1111401 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":20733,"bootTime":1710749853,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:03:05.658253 1111401 start.go:139] virtualization: kvm guest
	I0318 14:03:05.660928 1111401 out.go:177] * [kubernetes-upgrade-140251] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:03:05.662939 1111401 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:03:05.662940 1111401 notify.go:220] Checking for updates...
	I0318 14:03:05.664423 1111401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:03:05.665912 1111401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:03:05.667400 1111401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:03:05.668869 1111401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:03:05.670237 1111401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:03:05.672248 1111401 config.go:182] Loaded profile config "NoKubernetes-091972": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0318 14:03:05.672393 1111401 config.go:182] Loaded profile config "cert-expiration-277126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:03:05.672531 1111401 config.go:182] Loaded profile config "running-upgrade-210993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0318 14:03:05.672715 1111401 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:03:05.711730 1111401 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 14:03:05.713272 1111401 start.go:297] selected driver: kvm2
	I0318 14:03:05.713301 1111401 start.go:901] validating driver "kvm2" against <nil>
	I0318 14:03:05.713315 1111401 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:03:05.714124 1111401 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:03:05.714230 1111401 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:03:05.731367 1111401 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:03:05.731423 1111401 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 14:03:05.731638 1111401 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 14:03:05.731697 1111401 cni.go:84] Creating CNI manager for ""
	I0318 14:03:05.731716 1111401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:03:05.731727 1111401 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 14:03:05.731777 1111401 start.go:340] cluster config:
	{Name:kubernetes-upgrade-140251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:03:05.731907 1111401 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:03:05.735139 1111401 out.go:177] * Starting "kubernetes-upgrade-140251" primary control-plane node in "kubernetes-upgrade-140251" cluster
	I0318 14:03:05.736727 1111401 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:03:05.736790 1111401 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 14:03:05.736805 1111401 cache.go:56] Caching tarball of preloaded images
	I0318 14:03:05.736899 1111401 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:03:05.737000 1111401 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 14:03:05.737352 1111401 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/config.json ...
	I0318 14:03:05.737405 1111401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/config.json: {Name:mkbd4f76a3881b8cf69caf0055e0cf598c63f144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:03:05.737614 1111401 start.go:360] acquireMachinesLock for kubernetes-upgrade-140251: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:03:12.213524 1111401 start.go:364] duration metric: took 6.47580898s to acquireMachinesLock for "kubernetes-upgrade-140251"
	I0318 14:03:12.213616 1111401 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-140251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:03:12.213765 1111401 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 14:03:12.216244 1111401 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 14:03:12.216453 1111401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:03:12.216491 1111401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:03:12.234605 1111401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0318 14:03:12.235096 1111401 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:03:12.235692 1111401 main.go:141] libmachine: Using API Version  1
	I0318 14:03:12.235724 1111401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:03:12.236114 1111401 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:03:12.236335 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetMachineName
	I0318 14:03:12.236500 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .DriverName
	I0318 14:03:12.236672 1111401 start.go:159] libmachine.API.Create for "kubernetes-upgrade-140251" (driver="kvm2")
	I0318 14:03:12.236709 1111401 client.go:168] LocalClient.Create starting
	I0318 14:03:12.236750 1111401 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 14:03:12.236792 1111401 main.go:141] libmachine: Decoding PEM data...
	I0318 14:03:12.236815 1111401 main.go:141] libmachine: Parsing certificate...
	I0318 14:03:12.236898 1111401 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 14:03:12.236926 1111401 main.go:141] libmachine: Decoding PEM data...
	I0318 14:03:12.236942 1111401 main.go:141] libmachine: Parsing certificate...
	I0318 14:03:12.236971 1111401 main.go:141] libmachine: Running pre-create checks...
	I0318 14:03:12.236991 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .PreCreateCheck
	I0318 14:03:12.237435 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetConfigRaw
	I0318 14:03:12.237852 1111401 main.go:141] libmachine: Creating machine...
	I0318 14:03:12.237870 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .Create
	I0318 14:03:12.238009 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Creating KVM machine...
	I0318 14:03:12.239361 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found existing default KVM network
	I0318 14:03:12.240871 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:12.240693 1111481 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0f:79:c9} reservation:<nil>}
	I0318 14:03:12.242244 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:12.242141 1111481 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:5d:8f:05} reservation:<nil>}
	I0318 14:03:12.243618 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:12.243532 1111481 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289240}
	I0318 14:03:12.243669 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | created network xml: 
	I0318 14:03:12.243695 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | <network>
	I0318 14:03:12.243707 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG |   <name>mk-kubernetes-upgrade-140251</name>
	I0318 14:03:12.243722 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG |   <dns enable='no'/>
	I0318 14:03:12.243741 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG |   
	I0318 14:03:12.243757 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0318 14:03:12.243770 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG |     <dhcp>
	I0318 14:03:12.243782 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0318 14:03:12.243794 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG |     </dhcp>
	I0318 14:03:12.243804 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG |   </ip>
	I0318 14:03:12.243814 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG |   
	I0318 14:03:12.243822 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | </network>
	I0318 14:03:12.243862 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | 
	I0318 14:03:12.249479 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | trying to create private KVM network mk-kubernetes-upgrade-140251 192.168.61.0/24...
	I0318 14:03:12.323357 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | private KVM network mk-kubernetes-upgrade-140251 192.168.61.0/24 created
	I0318 14:03:12.323468 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251 ...
	I0318 14:03:12.323517 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 14:03:12.323533 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:12.323351 1111481 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:03:12.323612 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 14:03:12.571340 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:12.571199 1111481 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/id_rsa...
	I0318 14:03:12.812267 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:12.812072 1111481 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/kubernetes-upgrade-140251.rawdisk...
	I0318 14:03:12.812316 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Writing magic tar header
	I0318 14:03:12.812333 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Writing SSH key tar header
	I0318 14:03:12.812345 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:12.812198 1111481 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251 ...
	I0318 14:03:12.812367 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251
	I0318 14:03:12.812384 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251 (perms=drwx------)
	I0318 14:03:12.812398 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 14:03:12.812416 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:03:12.812430 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 14:03:12.812446 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 14:03:12.812465 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 14:03:12.812479 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 14:03:12.812494 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 14:03:12.812509 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 14:03:12.812522 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 14:03:12.812535 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Checking permissions on dir: /home/jenkins
	I0318 14:03:12.812550 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Checking permissions on dir: /home
	I0318 14:03:12.812561 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Creating domain...
	I0318 14:03:12.812574 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Skipping /home - not owner
	I0318 14:03:12.813962 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) define libvirt domain using xml: 
	I0318 14:03:12.813995 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) <domain type='kvm'>
	I0318 14:03:12.814007 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   <name>kubernetes-upgrade-140251</name>
	I0318 14:03:12.814019 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   <memory unit='MiB'>2200</memory>
	I0318 14:03:12.814027 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   <vcpu>2</vcpu>
	I0318 14:03:12.814034 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   <features>
	I0318 14:03:12.814042 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <acpi/>
	I0318 14:03:12.814053 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <apic/>
	I0318 14:03:12.814065 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <pae/>
	I0318 14:03:12.814074 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     
	I0318 14:03:12.814088 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   </features>
	I0318 14:03:12.814102 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   <cpu mode='host-passthrough'>
	I0318 14:03:12.814115 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   
	I0318 14:03:12.814123 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   </cpu>
	I0318 14:03:12.814136 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   <os>
	I0318 14:03:12.814145 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <type>hvm</type>
	I0318 14:03:12.814159 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <boot dev='cdrom'/>
	I0318 14:03:12.814174 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <boot dev='hd'/>
	I0318 14:03:12.814188 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <bootmenu enable='no'/>
	I0318 14:03:12.814200 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   </os>
	I0318 14:03:12.814213 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   <devices>
	I0318 14:03:12.814225 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <disk type='file' device='cdrom'>
	I0318 14:03:12.814242 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/boot2docker.iso'/>
	I0318 14:03:12.814260 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <target dev='hdc' bus='scsi'/>
	I0318 14:03:12.814271 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <readonly/>
	I0318 14:03:12.814282 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     </disk>
	I0318 14:03:12.814296 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <disk type='file' device='disk'>
	I0318 14:03:12.814312 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 14:03:12.814331 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/kubernetes-upgrade-140251.rawdisk'/>
	I0318 14:03:12.814348 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <target dev='hda' bus='virtio'/>
	I0318 14:03:12.814362 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     </disk>
	I0318 14:03:12.814374 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <interface type='network'>
	I0318 14:03:12.814405 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <source network='mk-kubernetes-upgrade-140251'/>
	I0318 14:03:12.814422 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <model type='virtio'/>
	I0318 14:03:12.814436 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     </interface>
	I0318 14:03:12.814448 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <interface type='network'>
	I0318 14:03:12.814463 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <source network='default'/>
	I0318 14:03:12.814475 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <model type='virtio'/>
	I0318 14:03:12.814488 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     </interface>
	I0318 14:03:12.814504 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <serial type='pty'>
	I0318 14:03:12.814518 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <target port='0'/>
	I0318 14:03:12.814529 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     </serial>
	I0318 14:03:12.814543 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <console type='pty'>
	I0318 14:03:12.814555 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <target type='serial' port='0'/>
	I0318 14:03:12.814568 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     </console>
	I0318 14:03:12.814590 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     <rng model='virtio'>
	I0318 14:03:12.814606 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)       <backend model='random'>/dev/random</backend>
	I0318 14:03:12.814617 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     </rng>
	I0318 14:03:12.814628 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     
	I0318 14:03:12.814639 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)     
	I0318 14:03:12.814692 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251)   </devices>
	I0318 14:03:12.814731 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) </domain>
	I0318 14:03:12.814745 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) 
	I0318 14:03:12.819321 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:de:5b:df in network default
	I0318 14:03:12.819998 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:12.820024 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Ensuring networks are active...
	I0318 14:03:12.820781 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Ensuring network default is active
	I0318 14:03:12.821113 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Ensuring network mk-kubernetes-upgrade-140251 is active
	I0318 14:03:12.821671 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Getting domain xml...
	I0318 14:03:12.822522 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Creating domain...
	I0318 14:03:14.153893 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Waiting to get IP...
	I0318 14:03:14.154803 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:14.155465 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:14.155496 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:14.155196 1111481 retry.go:31] will retry after 215.793617ms: waiting for machine to come up
	I0318 14:03:14.372944 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:14.373448 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:14.373488 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:14.373408 1111481 retry.go:31] will retry after 347.913603ms: waiting for machine to come up
	I0318 14:03:14.723058 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:14.723615 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:14.723644 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:14.723562 1111481 retry.go:31] will retry after 384.662832ms: waiting for machine to come up
	I0318 14:03:15.110072 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:15.110553 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:15.110581 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:15.110508 1111481 retry.go:31] will retry after 512.97504ms: waiting for machine to come up
	I0318 14:03:15.625120 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:15.625669 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:15.625706 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:15.625602 1111481 retry.go:31] will retry after 545.070223ms: waiting for machine to come up
	I0318 14:03:16.172225 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:16.172737 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:16.172789 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:16.172693 1111481 retry.go:31] will retry after 891.579638ms: waiting for machine to come up
	I0318 14:03:17.065985 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:17.066590 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:17.066625 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:17.066515 1111481 retry.go:31] will retry after 1.172204567s: waiting for machine to come up
	I0318 14:03:18.240561 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:18.241068 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:18.241105 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:18.241006 1111481 retry.go:31] will retry after 1.1900498s: waiting for machine to come up
	I0318 14:03:19.433432 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:19.433915 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:19.433958 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:19.433852 1111481 retry.go:31] will retry after 1.682427343s: waiting for machine to come up
	I0318 14:03:21.118451 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:21.119263 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:21.119301 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:21.119184 1111481 retry.go:31] will retry after 1.515186101s: waiting for machine to come up
	I0318 14:03:22.636583 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:22.637135 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:22.637173 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:22.637035 1111481 retry.go:31] will retry after 2.187384029s: waiting for machine to come up
	I0318 14:03:24.827776 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:24.828343 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:24.828369 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:24.828286 1111481 retry.go:31] will retry after 2.351760257s: waiting for machine to come up
	I0318 14:03:27.182024 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:27.182604 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:27.182678 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:27.182578 1111481 retry.go:31] will retry after 2.914817924s: waiting for machine to come up
	I0318 14:03:30.101143 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:30.101635 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find current IP address of domain kubernetes-upgrade-140251 in network mk-kubernetes-upgrade-140251
	I0318 14:03:30.101649 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | I0318 14:03:30.101612 1111481 retry.go:31] will retry after 4.018780281s: waiting for machine to come up
	I0318 14:03:34.122099 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.122598 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Found IP for machine: 192.168.61.54
	I0318 14:03:34.122627 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Reserving static IP address...
	I0318 14:03:34.122646 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has current primary IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.122991 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-140251", mac: "52:54:00:10:9c:c2", ip: "192.168.61.54"} in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.205323 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Getting to WaitForSSH function...
	I0318 14:03:34.205358 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Reserved static IP address: 192.168.61.54
	I0318 14:03:34.205372 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Waiting for SSH to be available...
	I0318 14:03:34.208546 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.209092 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:34.209122 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.209321 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Using SSH client type: external
	I0318 14:03:34.209348 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/id_rsa (-rw-------)
	I0318 14:03:34.209389 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:03:34.209408 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | About to run SSH command:
	I0318 14:03:34.209422 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | exit 0
	I0318 14:03:34.332107 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | SSH cmd err, output: <nil>: 
	I0318 14:03:34.332367 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) KVM machine creation complete!
	I0318 14:03:34.332738 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetConfigRaw
	I0318 14:03:34.333436 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .DriverName
	I0318 14:03:34.333655 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .DriverName
	I0318 14:03:34.333864 1111401 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 14:03:34.333884 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetState
	I0318 14:03:34.335408 1111401 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 14:03:34.335422 1111401 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 14:03:34.335428 1111401 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 14:03:34.335434 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:34.338301 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.338681 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:34.338715 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.338905 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:03:34.339112 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:34.339340 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:34.339534 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:03:34.339730 1111401 main.go:141] libmachine: Using SSH client type: native
	I0318 14:03:34.340006 1111401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0318 14:03:34.340021 1111401 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 14:03:34.439333 1111401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:03:34.439359 1111401 main.go:141] libmachine: Detecting the provisioner...
	I0318 14:03:34.439367 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:34.442486 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.442932 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:34.442969 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.443123 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:03:34.443361 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:34.443523 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:34.443641 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:03:34.443778 1111401 main.go:141] libmachine: Using SSH client type: native
	I0318 14:03:34.443975 1111401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0318 14:03:34.443989 1111401 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 14:03:34.544964 1111401 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 14:03:34.545027 1111401 main.go:141] libmachine: found compatible host: buildroot
	I0318 14:03:34.545034 1111401 main.go:141] libmachine: Provisioning with buildroot...
	I0318 14:03:34.545043 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetMachineName
	I0318 14:03:34.545292 1111401 buildroot.go:166] provisioning hostname "kubernetes-upgrade-140251"
	I0318 14:03:34.545305 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetMachineName
	I0318 14:03:34.545524 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:34.548355 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.548691 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:34.548716 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.548930 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:03:34.549112 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:34.549258 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:34.549404 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:03:34.549560 1111401 main.go:141] libmachine: Using SSH client type: native
	I0318 14:03:34.549782 1111401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0318 14:03:34.549800 1111401 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-140251 && echo "kubernetes-upgrade-140251" | sudo tee /etc/hostname
	I0318 14:03:34.664072 1111401 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-140251
	
	I0318 14:03:34.664108 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:34.667188 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.667588 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:34.667617 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.667861 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:03:34.668096 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:34.668284 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:34.668423 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:03:34.668583 1111401 main.go:141] libmachine: Using SSH client type: native
	I0318 14:03:34.668780 1111401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0318 14:03:34.668801 1111401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-140251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-140251/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-140251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:03:34.778249 1111401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:03:34.778289 1111401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:03:34.778343 1111401 buildroot.go:174] setting up certificates
	I0318 14:03:34.778364 1111401 provision.go:84] configureAuth start
	I0318 14:03:34.778380 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetMachineName
	I0318 14:03:34.778726 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetIP
	I0318 14:03:34.781575 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.781970 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:34.782002 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.782167 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:34.784736 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.785131 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:34.785160 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.785301 1111401 provision.go:143] copyHostCerts
	I0318 14:03:34.785409 1111401 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:03:34.785425 1111401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:03:34.785485 1111401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:03:34.785603 1111401 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:03:34.785614 1111401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:03:34.785636 1111401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:03:34.785691 1111401 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:03:34.785701 1111401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:03:34.785718 1111401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:03:34.785776 1111401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-140251 san=[127.0.0.1 192.168.61.54 kubernetes-upgrade-140251 localhost minikube]
	I0318 14:03:34.900857 1111401 provision.go:177] copyRemoteCerts
	I0318 14:03:34.900927 1111401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:03:34.900956 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:34.903618 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.904002 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:34.904031 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:34.904259 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:03:34.904462 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:34.904627 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:03:34.904810 1111401 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/id_rsa Username:docker}
	I0318 14:03:34.984864 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:03:35.012210 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0318 14:03:35.041948 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:03:35.067896 1111401 provision.go:87] duration metric: took 289.5129ms to configureAuth
	I0318 14:03:35.067942 1111401 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:03:35.068164 1111401 config.go:182] Loaded profile config "kubernetes-upgrade-140251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:03:35.068276 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:35.071055 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.071366 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:35.071410 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.071536 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:03:35.071756 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:35.071959 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:35.072148 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:03:35.072346 1111401 main.go:141] libmachine: Using SSH client type: native
	I0318 14:03:35.072571 1111401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0318 14:03:35.072597 1111401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:03:35.358801 1111401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:03:35.358831 1111401 main.go:141] libmachine: Checking connection to Docker...
	I0318 14:03:35.358839 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetURL
	I0318 14:03:35.360248 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Using libvirt version 6000000
	I0318 14:03:35.362600 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.363011 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:35.363035 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.363227 1111401 main.go:141] libmachine: Docker is up and running!
	I0318 14:03:35.363247 1111401 main.go:141] libmachine: Reticulating splines...
	I0318 14:03:35.363256 1111401 client.go:171] duration metric: took 23.126534986s to LocalClient.Create
	I0318 14:03:35.363284 1111401 start.go:167] duration metric: took 23.126612132s to libmachine.API.Create "kubernetes-upgrade-140251"
	I0318 14:03:35.363293 1111401 start.go:293] postStartSetup for "kubernetes-upgrade-140251" (driver="kvm2")
	I0318 14:03:35.363304 1111401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:03:35.363327 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .DriverName
	I0318 14:03:35.363594 1111401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:03:35.363627 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:35.365751 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.366096 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:35.366122 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.366275 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:03:35.366453 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:35.366584 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:03:35.366720 1111401 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/id_rsa Username:docker}
	I0318 14:03:35.451501 1111401 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:03:35.455966 1111401 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:03:35.455998 1111401 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:03:35.456067 1111401 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:03:35.456175 1111401 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:03:35.456301 1111401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:03:35.466772 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:03:35.492295 1111401 start.go:296] duration metric: took 128.986383ms for postStartSetup
	I0318 14:03:35.492355 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetConfigRaw
	I0318 14:03:35.492991 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetIP
	I0318 14:03:35.495768 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.496164 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:35.496203 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.496470 1111401 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/config.json ...
	I0318 14:03:35.496734 1111401 start.go:128] duration metric: took 23.282948915s to createHost
	I0318 14:03:35.496780 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:35.499082 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.499426 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:35.499448 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.499646 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:03:35.499875 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:35.500092 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:35.500304 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:03:35.500513 1111401 main.go:141] libmachine: Using SSH client type: native
	I0318 14:03:35.500739 1111401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I0318 14:03:35.500756 1111401 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 14:03:35.600699 1111401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710770615.567531387
	
	I0318 14:03:35.600736 1111401 fix.go:216] guest clock: 1710770615.567531387
	I0318 14:03:35.600748 1111401 fix.go:229] Guest: 2024-03-18 14:03:35.567531387 +0000 UTC Remote: 2024-03-18 14:03:35.496750821 +0000 UTC m=+29.890865452 (delta=70.780566ms)
	I0318 14:03:35.600830 1111401 fix.go:200] guest clock delta is within tolerance: 70.780566ms
	I0318 14:03:35.600840 1111401 start.go:83] releasing machines lock for "kubernetes-upgrade-140251", held for 23.387276417s
	I0318 14:03:35.600888 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .DriverName
	I0318 14:03:35.601237 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetIP
	I0318 14:03:35.604123 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.604534 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:35.604567 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.604732 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .DriverName
	I0318 14:03:35.605305 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .DriverName
	I0318 14:03:35.605503 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .DriverName
	I0318 14:03:35.605623 1111401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:03:35.605687 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:35.605788 1111401 ssh_runner.go:195] Run: cat /version.json
	I0318 14:03:35.605832 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:03:35.608486 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.608684 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.608850 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:35.608873 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.608999 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:03:35.609129 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:35.609153 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:35.609158 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:35.609367 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:03:35.609402 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:03:35.609620 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:03:35.609640 1111401 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/id_rsa Username:docker}
	I0318 14:03:35.609819 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:03:35.610007 1111401 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/id_rsa Username:docker}
	I0318 14:03:35.685639 1111401 ssh_runner.go:195] Run: systemctl --version
	I0318 14:03:35.710516 1111401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:03:35.876911 1111401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:03:35.883768 1111401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:03:35.883873 1111401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:03:35.902791 1111401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:03:35.902818 1111401 start.go:494] detecting cgroup driver to use...
	I0318 14:03:35.902902 1111401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:03:35.920245 1111401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:03:35.935609 1111401 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:03:35.935694 1111401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:03:35.951494 1111401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:03:35.967268 1111401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:03:36.094687 1111401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:03:36.250168 1111401 docker.go:233] disabling docker service ...
	I0318 14:03:36.250249 1111401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:03:36.266908 1111401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:03:36.282508 1111401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:03:36.439701 1111401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:03:36.586600 1111401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:03:36.602142 1111401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:03:36.625817 1111401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 14:03:36.625911 1111401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:03:36.638337 1111401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:03:36.638422 1111401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:03:36.653982 1111401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:03:36.668311 1111401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:03:36.679473 1111401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:03:36.691329 1111401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:03:36.702201 1111401 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:03:36.702278 1111401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:03:36.718582 1111401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:03:36.730287 1111401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:03:36.865679 1111401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:03:37.053581 1111401 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:03:37.053676 1111401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:03:37.059263 1111401 start.go:562] Will wait 60s for crictl version
	I0318 14:03:37.059332 1111401 ssh_runner.go:195] Run: which crictl
	I0318 14:03:37.063702 1111401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:03:37.105196 1111401 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:03:37.105293 1111401 ssh_runner.go:195] Run: crio --version
	I0318 14:03:37.137698 1111401 ssh_runner.go:195] Run: crio --version
	I0318 14:03:37.172669 1111401 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 14:03:37.173996 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetIP
	I0318 14:03:37.177111 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:37.177524 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:03:27 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:03:37.177555 1111401 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:03:37.177746 1111401 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 14:03:37.182426 1111401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:03:37.196910 1111401 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-140251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.54 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:03:37.197037 1111401 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:03:37.197093 1111401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:03:37.232430 1111401 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:03:37.232522 1111401 ssh_runner.go:195] Run: which lz4
	I0318 14:03:37.236851 1111401 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:03:37.241642 1111401 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:03:37.241681 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 14:03:39.174387 1111401 crio.go:444] duration metric: took 1.937578291s to copy over tarball
	I0318 14:03:39.174460 1111401 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:03:42.037664 1111401 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.863162538s)
	I0318 14:03:42.037702 1111401 crio.go:451] duration metric: took 2.863282992s to extract the tarball
	I0318 14:03:42.037713 1111401 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:03:42.097578 1111401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:03:42.154529 1111401 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:03:42.154561 1111401 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:03:42.154620 1111401 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:03:42.154635 1111401 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:03:42.154662 1111401 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:03:42.154688 1111401 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 14:03:42.154702 1111401 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:03:42.154737 1111401 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:03:42.154647 1111401 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:03:42.154780 1111401 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 14:03:42.156325 1111401 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:03:42.156337 1111401 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 14:03:42.156373 1111401 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:03:42.156396 1111401 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:03:42.156420 1111401 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:03:42.156445 1111401 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:03:42.156489 1111401 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 14:03:42.156553 1111401 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:03:42.309123 1111401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 14:03:42.316702 1111401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:03:42.317745 1111401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:03:42.321668 1111401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:03:42.322656 1111401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 14:03:42.331097 1111401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:03:42.346740 1111401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 14:03:42.416609 1111401 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 14:03:42.416667 1111401 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:03:42.416719 1111401 ssh_runner.go:195] Run: which crictl
	I0318 14:03:42.477474 1111401 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 14:03:42.477535 1111401 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:03:42.477586 1111401 ssh_runner.go:195] Run: which crictl
	I0318 14:03:42.521588 1111401 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 14:03:42.521649 1111401 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:03:42.521708 1111401 ssh_runner.go:195] Run: which crictl
	I0318 14:03:42.532887 1111401 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 14:03:42.532947 1111401 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:03:42.533010 1111401 ssh_runner.go:195] Run: which crictl
	I0318 14:03:42.536728 1111401 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 14:03:42.536774 1111401 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 14:03:42.536826 1111401 ssh_runner.go:195] Run: which crictl
	I0318 14:03:42.536833 1111401 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 14:03:42.536872 1111401 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:03:42.536927 1111401 ssh_runner.go:195] Run: which crictl
	I0318 14:03:42.544563 1111401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:03:42.544563 1111401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 14:03:42.544606 1111401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:03:42.544799 1111401 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 14:03:42.544837 1111401 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 14:03:42.544868 1111401 ssh_runner.go:195] Run: which crictl
	I0318 14:03:42.545307 1111401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 14:03:42.545390 1111401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:03:42.559738 1111401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:03:42.666307 1111401 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 14:03:42.692277 1111401 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 14:03:42.692313 1111401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 14:03:42.692408 1111401 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 14:03:42.701378 1111401 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 14:03:42.701472 1111401 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 14:03:42.701512 1111401 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 14:03:42.736105 1111401 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 14:03:42.861276 1111401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:03:43.006438 1111401 cache_images.go:92] duration metric: took 851.854574ms to LoadCachedImages
	W0318 14:03:43.006551 1111401 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0318 14:03:43.006568 1111401 kubeadm.go:928] updating node { 192.168.61.54 8443 v1.20.0 crio true true} ...
	I0318 14:03:43.006706 1111401 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-140251 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:03:43.006796 1111401 ssh_runner.go:195] Run: crio config
	I0318 14:03:43.079547 1111401 cni.go:84] Creating CNI manager for ""
	I0318 14:03:43.079579 1111401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:03:43.079595 1111401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:03:43.079622 1111401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.54 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-140251 NodeName:kubernetes-upgrade-140251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 14:03:43.079822 1111401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-140251"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:03:43.079918 1111401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 14:03:43.094491 1111401 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:03:43.094579 1111401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:03:43.108584 1111401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0318 14:03:43.130919 1111401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:03:43.155270 1111401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 14:03:43.178721 1111401 ssh_runner.go:195] Run: grep 192.168.61.54	control-plane.minikube.internal$ /etc/hosts
	I0318 14:03:43.184816 1111401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:03:43.203857 1111401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:03:43.351677 1111401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:03:43.371674 1111401 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251 for IP: 192.168.61.54
	I0318 14:03:43.371709 1111401 certs.go:194] generating shared ca certs ...
	I0318 14:03:43.371734 1111401 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:03:43.371942 1111401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:03:43.371999 1111401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:03:43.372012 1111401 certs.go:256] generating profile certs ...
	I0318 14:03:43.372092 1111401 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/client.key
	I0318 14:03:43.372113 1111401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/client.crt with IP's: []
	I0318 14:03:43.453627 1111401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/client.crt ...
	I0318 14:03:43.453679 1111401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/client.crt: {Name:mkdfade95375c89876568dedcc1aad2a7a463902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:03:43.453917 1111401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/client.key ...
	I0318 14:03:43.453953 1111401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/client.key: {Name:mk1f9e3fe1fa260b8d0e6051dabbf1b21fa269dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:03:43.454066 1111401 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.key.912c17aa
	I0318 14:03:43.454098 1111401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.crt.912c17aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.54]
	I0318 14:03:43.652572 1111401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.crt.912c17aa ...
	I0318 14:03:43.652608 1111401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.crt.912c17aa: {Name:mk5007589285d4819a3e411784b969bec46d0384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:03:43.701509 1111401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.key.912c17aa ...
	I0318 14:03:43.701561 1111401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.key.912c17aa: {Name:mk48787eac3b29a639fa07758832e0df8b5f3cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:03:43.701773 1111401 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.crt.912c17aa -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.crt
	I0318 14:03:43.701886 1111401 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.key.912c17aa -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.key
	I0318 14:03:43.701965 1111401 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/proxy-client.key
	I0318 14:03:43.701991 1111401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/proxy-client.crt with IP's: []
	I0318 14:03:43.826162 1111401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/proxy-client.crt ...
	I0318 14:03:43.826208 1111401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/proxy-client.crt: {Name:mkd3abe6536e808adea9b855775a271d44017c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:03:43.861361 1111401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/proxy-client.key ...
	I0318 14:03:43.861412 1111401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/proxy-client.key: {Name:mkb202ff3f35a67884cf3a57679b233f36f3c922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:03:43.861728 1111401 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:03:43.861790 1111401 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:03:43.861801 1111401 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:03:43.861834 1111401 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:03:43.861866 1111401 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:03:43.861898 1111401 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:03:43.861952 1111401 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:03:43.862847 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:03:43.896205 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:03:43.927947 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:03:43.958673 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:03:43.990607 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 14:03:44.050354 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 14:03:44.081344 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:03:44.114758 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:03:44.145593 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:03:44.177373 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:03:44.214355 1111401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:03:44.262081 1111401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:03:44.297846 1111401 ssh_runner.go:195] Run: openssl version
	I0318 14:03:44.306081 1111401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:03:44.324928 1111401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:03:44.331232 1111401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:03:44.331326 1111401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:03:44.338457 1111401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:03:44.350939 1111401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:03:44.362970 1111401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:03:44.368364 1111401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:03:44.368554 1111401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:03:44.375383 1111401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:03:44.391115 1111401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:03:44.404193 1111401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:03:44.409805 1111401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:03:44.409889 1111401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:03:44.417028 1111401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:03:44.435538 1111401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:03:44.440747 1111401 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 14:03:44.440824 1111401 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-140251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.54 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:03:44.440937 1111401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:03:44.441005 1111401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:03:44.483718 1111401 cri.go:89] found id: ""
	I0318 14:03:44.483806 1111401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 14:03:44.495115 1111401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:03:44.509217 1111401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:03:44.522888 1111401 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:03:44.522917 1111401 kubeadm.go:156] found existing configuration files:
	
	I0318 14:03:44.522982 1111401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:03:44.534124 1111401 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:03:44.534214 1111401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:03:44.545218 1111401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:03:44.555581 1111401 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:03:44.555643 1111401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:03:44.566439 1111401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:03:44.578160 1111401 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:03:44.578254 1111401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:03:44.590637 1111401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:03:44.603452 1111401 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:03:44.603535 1111401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:03:44.617279 1111401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:03:44.756709 1111401 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:03:44.756780 1111401 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:03:44.976290 1111401 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:03:44.976464 1111401 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:03:44.976625 1111401 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:03:45.233067 1111401 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:03:45.301630 1111401 out.go:204]   - Generating certificates and keys ...
	I0318 14:03:45.301782 1111401 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:03:45.301898 1111401 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:03:45.398165 1111401 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 14:03:46.121739 1111401 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 14:03:46.439862 1111401 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 14:03:46.658802 1111401 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 14:03:46.782629 1111401 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 14:03:46.782840 1111401 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-140251 localhost] and IPs [192.168.61.54 127.0.0.1 ::1]
	I0318 14:03:46.860407 1111401 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 14:03:46.860670 1111401 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-140251 localhost] and IPs [192.168.61.54 127.0.0.1 ::1]
	I0318 14:03:46.961585 1111401 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 14:03:47.252635 1111401 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 14:03:47.333348 1111401 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 14:03:47.333489 1111401 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:03:47.505550 1111401 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:03:47.655482 1111401 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:03:47.847767 1111401 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:03:47.986731 1111401 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:03:48.004338 1111401 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:03:48.007490 1111401 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:03:48.007599 1111401 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:03:48.149521 1111401 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:03:48.151648 1111401 out.go:204]   - Booting up control plane ...
	I0318 14:03:48.151799 1111401 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:03:48.157103 1111401 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:03:48.158693 1111401 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:03:48.159428 1111401 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:03:48.163600 1111401 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:04:28.155657 1111401 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:04:28.155786 1111401 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:04:28.156069 1111401 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:04:33.156500 1111401 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:04:33.156796 1111401 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:04:43.154729 1111401 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:04:43.155016 1111401 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:05:03.152438 1111401 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:05:03.152700 1111401 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:05:43.153230 1111401 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:05:43.153519 1111401 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:05:43.153539 1111401 kubeadm.go:309] 
	I0318 14:05:43.153587 1111401 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:05:43.153644 1111401 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:05:43.153655 1111401 kubeadm.go:309] 
	I0318 14:05:43.153712 1111401 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:05:43.153762 1111401 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:05:43.153896 1111401 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:05:43.153906 1111401 kubeadm.go:309] 
	I0318 14:05:43.154029 1111401 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:05:43.154076 1111401 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:05:43.154120 1111401 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:05:43.154131 1111401 kubeadm.go:309] 
	I0318 14:05:43.154278 1111401 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:05:43.154384 1111401 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:05:43.154397 1111401 kubeadm.go:309] 
	I0318 14:05:43.154520 1111401 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:05:43.154633 1111401 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:05:43.154733 1111401 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:05:43.154838 1111401 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:05:43.154852 1111401 kubeadm.go:309] 
	I0318 14:05:43.155560 1111401 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:05:43.155693 1111401 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:05:43.155788 1111401 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 14:05:43.155997 1111401 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-140251 localhost] and IPs [192.168.61.54 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-140251 localhost] and IPs [192.168.61.54 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-140251 localhost] and IPs [192.168.61.54 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-140251 localhost] and IPs [192.168.61.54 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 14:05:43.156065 1111401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:05:44.456725 1111401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.300627704s)
	I0318 14:05:44.456816 1111401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:05:44.476955 1111401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:05:44.492987 1111401 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:05:44.493016 1111401 kubeadm.go:156] found existing configuration files:
	
	I0318 14:05:44.493075 1111401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:05:44.507819 1111401 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:05:44.507920 1111401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:05:44.523641 1111401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:05:44.538700 1111401 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:05:44.538789 1111401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:05:44.554343 1111401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:05:44.567750 1111401 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:05:44.567860 1111401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:05:44.585288 1111401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:05:44.598659 1111401 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:05:44.598737 1111401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:05:44.614698 1111401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:05:44.697800 1111401 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:05:44.697893 1111401 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:05:44.855007 1111401 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:05:44.855192 1111401 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:05:44.855358 1111401 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:05:45.071243 1111401 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:05:45.074246 1111401 out.go:204]   - Generating certificates and keys ...
	I0318 14:05:45.074368 1111401 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:05:45.074512 1111401 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:05:45.074652 1111401 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:05:45.074765 1111401 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:05:45.074871 1111401 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:05:45.074937 1111401 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:05:45.075025 1111401 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:05:45.075139 1111401 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:05:45.075250 1111401 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:05:45.075695 1111401 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:05:45.075783 1111401 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:05:45.075922 1111401 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:05:45.367862 1111401 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:05:45.634690 1111401 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:05:45.790046 1111401 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:05:45.850473 1111401 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:05:45.866973 1111401 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:05:45.867151 1111401 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:05:45.867234 1111401 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:05:46.063358 1111401 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:05:46.065471 1111401 out.go:204]   - Booting up control plane ...
	I0318 14:05:46.065588 1111401 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:05:46.080357 1111401 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:05:46.080481 1111401 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:05:46.080614 1111401 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:05:46.082069 1111401 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:06:26.084464 1111401 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:06:26.085077 1111401 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:06:26.085364 1111401 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:06:31.085923 1111401 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:06:31.086096 1111401 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:06:41.086853 1111401 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:06:41.087107 1111401 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:07:01.087706 1111401 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:07:01.087984 1111401 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:07:41.089879 1111401 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:07:41.090191 1111401 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:07:41.090220 1111401 kubeadm.go:309] 
	I0318 14:07:41.090303 1111401 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:07:41.090383 1111401 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:07:41.090401 1111401 kubeadm.go:309] 
	I0318 14:07:41.090474 1111401 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:07:41.090518 1111401 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:07:41.090639 1111401 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:07:41.090651 1111401 kubeadm.go:309] 
	I0318 14:07:41.090787 1111401 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:07:41.090836 1111401 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:07:41.090890 1111401 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:07:41.090900 1111401 kubeadm.go:309] 
	I0318 14:07:41.091054 1111401 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:07:41.091188 1111401 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:07:41.091203 1111401 kubeadm.go:309] 
	I0318 14:07:41.091366 1111401 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:07:41.091495 1111401 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:07:41.091602 1111401 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:07:41.091663 1111401 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:07:41.091673 1111401 kubeadm.go:309] 
	I0318 14:07:41.091928 1111401 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:07:41.092058 1111401 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:07:41.092158 1111401 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 14:07:41.092234 1111401 kubeadm.go:393] duration metric: took 3m56.651419136s to StartCluster
	I0318 14:07:41.092284 1111401 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:07:41.092344 1111401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:07:41.140996 1111401 cri.go:89] found id: ""
	I0318 14:07:41.141042 1111401 logs.go:276] 0 containers: []
	W0318 14:07:41.141051 1111401 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:07:41.141060 1111401 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:07:41.141129 1111401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:07:41.180097 1111401 cri.go:89] found id: ""
	I0318 14:07:41.180145 1111401 logs.go:276] 0 containers: []
	W0318 14:07:41.180154 1111401 logs.go:278] No container was found matching "etcd"
	I0318 14:07:41.180162 1111401 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:07:41.180225 1111401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:07:41.217774 1111401 cri.go:89] found id: ""
	I0318 14:07:41.217813 1111401 logs.go:276] 0 containers: []
	W0318 14:07:41.217825 1111401 logs.go:278] No container was found matching "coredns"
	I0318 14:07:41.217833 1111401 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:07:41.217937 1111401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:07:41.258933 1111401 cri.go:89] found id: ""
	I0318 14:07:41.258966 1111401 logs.go:276] 0 containers: []
	W0318 14:07:41.258978 1111401 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:07:41.258987 1111401 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:07:41.259057 1111401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:07:41.297764 1111401 cri.go:89] found id: ""
	I0318 14:07:41.297794 1111401 logs.go:276] 0 containers: []
	W0318 14:07:41.297803 1111401 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:07:41.297811 1111401 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:07:41.297880 1111401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:07:41.340809 1111401 cri.go:89] found id: ""
	I0318 14:07:41.340844 1111401 logs.go:276] 0 containers: []
	W0318 14:07:41.340855 1111401 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:07:41.340862 1111401 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:07:41.340962 1111401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:07:41.381794 1111401 cri.go:89] found id: ""
	I0318 14:07:41.381833 1111401 logs.go:276] 0 containers: []
	W0318 14:07:41.381845 1111401 logs.go:278] No container was found matching "kindnet"
	I0318 14:07:41.381861 1111401 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:07:41.381879 1111401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:07:41.477016 1111401 logs.go:123] Gathering logs for container status ...
	I0318 14:07:41.477061 1111401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:07:41.521939 1111401 logs.go:123] Gathering logs for kubelet ...
	I0318 14:07:41.521976 1111401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:07:41.587309 1111401 logs.go:123] Gathering logs for dmesg ...
	I0318 14:07:41.587365 1111401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:07:41.604016 1111401 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:07:41.604051 1111401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:07:41.738737 1111401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0318 14:07:41.738815 1111401 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 14:07:41.738873 1111401 out.go:239] * 
	* 
	W0318 14:07:41.738941 1111401 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:07:41.738974 1111401 out.go:239] * 
	* 
	W0318 14:07:41.739853 1111401 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:07:41.743759 1111401 out.go:177] 
	W0318 14:07:41.745249 1111401 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:07:41.745382 1111401 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 14:07:41.745455 1111401 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 14:07:41.747633 1111401 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-140251 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-140251
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-140251: (2.33630932s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-140251 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-140251 status --format={{.Host}}: exit status 7 (92.273603ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-140251 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-140251 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.030935881s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-140251 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-140251 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-140251 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (120.260072ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-140251] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-140251
	    minikube start -p kubernetes-upgrade-140251 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1402512 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-140251 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-140251 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-140251 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (18.083371314s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-18 14:08:55.533533199 +0000 UTC m=+5053.326992404
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-140251 -n kubernetes-upgrade-140251
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-140251 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-140251 logs -n 25: (1.453011443s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-059272 sudo                               | kindnet-059272            | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo systemctl                        | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-059272 sudo                               | kindnet-059272            | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo cat                              | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-059272 sudo find                          | kindnet-059272            | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo docker                           | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-059272 sudo crio                          | kindnet-059272            | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo systemctl                        | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC |                     |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo systemctl                        | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p kindnet-059272                                    | kindnet-059272            | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	| ssh     | -p auto-059272 sudo cat                              | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo cat                              | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo                                  | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| start   | -p custom-flannel-059272                             | custom-flannel-059272     | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                           |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo systemctl                        | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC |                     |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo systemctl                        | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo cat                              | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo cat                              | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo containerd                       | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | config dump                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo systemctl                        | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo systemctl                        | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo find                             | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-059272 sudo crio                             | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-059272                                       | auto-059272               | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	| start   | -p enable-default-cni-059272                         | enable-default-cni-059272 | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:08:52
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:08:52.704705 1118221 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:08:52.704887 1118221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:08:52.704901 1118221 out.go:304] Setting ErrFile to fd 2...
	I0318 14:08:52.704906 1118221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:08:52.705122 1118221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:08:52.705814 1118221 out.go:298] Setting JSON to false
	I0318 14:08:52.707245 1118221 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21080,"bootTime":1710749853,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:08:52.707331 1118221 start.go:139] virtualization: kvm guest
	I0318 14:08:52.709814 1118221 out.go:177] * [enable-default-cni-059272] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:08:52.711310 1118221 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:08:52.711307 1118221 notify.go:220] Checking for updates...
	I0318 14:08:52.713005 1118221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:08:52.714664 1118221 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:08:52.716473 1118221 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:08:52.718265 1118221 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:08:52.719848 1118221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:08:52.721663 1118221 config.go:182] Loaded profile config "calico-059272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:08:52.721781 1118221 config.go:182] Loaded profile config "custom-flannel-059272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:08:52.721881 1118221 config.go:182] Loaded profile config "kubernetes-upgrade-140251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:08:52.722020 1118221 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:08:52.766375 1118221 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 14:08:52.767722 1118221 start.go:297] selected driver: kvm2
	I0318 14:08:52.767740 1118221 start.go:901] validating driver "kvm2" against <nil>
	I0318 14:08:52.767751 1118221 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:08:52.768599 1118221 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:08:52.768696 1118221 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:08:52.786482 1118221 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:08:52.786545 1118221 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0318 14:08:52.786829 1118221 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0318 14:08:52.786867 1118221 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:08:52.786956 1118221 cni.go:84] Creating CNI manager for "bridge"
	I0318 14:08:52.786972 1118221 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 14:08:52.787047 1118221 start.go:340] cluster config:
	{Name:enable-default-cni-059272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-059272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:08:52.787186 1118221 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:08:52.788888 1118221 out.go:177] * Starting "enable-default-cni-059272" primary control-plane node in "enable-default-cni-059272" cluster
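The E-level start_flags.go line above records minikube translating the deprecated --enable-default-cni flag into --cni=bridge before generating the cluster config (note EnableDefaultCNI:false, CNI:bridge in the dumped config). A minimal sketch of that translation in Go; the function and variable names are illustrative, not minikube's actual start_flags.go internals:

package main

import "fmt"

// chooseCNI folds a legacy boolean flag into the newer string-valued --cni
// option, mirroring the idea behind the log line above. Names are assumptions
// made for this sketch.
func chooseCNI(enableDefaultCNI bool, cni string) string {
	if enableDefaultCNI && cni == "" {
		fmt.Println("Found deprecated --enable-default-cni flag, setting --cni=bridge")
		return "bridge"
	}
	return cni
}

func main() {
	fmt.Println(chooseCNI(true, "")) // prints "bridge"
}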
	I0318 14:08:48.416459 1117758 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 14:08:48.416622 1117758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:08:48.416667 1117758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:08:48.434539 1117758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38297
	I0318 14:08:48.435120 1117758 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:08:48.435745 1117758 main.go:141] libmachine: Using API Version  1
	I0318 14:08:48.435774 1117758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:08:48.436172 1117758 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:08:48.436393 1117758 main.go:141] libmachine: (custom-flannel-059272) Calling .GetMachineName
	I0318 14:08:48.436584 1117758 main.go:141] libmachine: (custom-flannel-059272) Calling .DriverName
	I0318 14:08:48.436767 1117758 start.go:159] libmachine.API.Create for "custom-flannel-059272" (driver="kvm2")
	I0318 14:08:48.436805 1117758 client.go:168] LocalClient.Create starting
	I0318 14:08:48.436856 1117758 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 14:08:48.436913 1117758 main.go:141] libmachine: Decoding PEM data...
	I0318 14:08:48.436938 1117758 main.go:141] libmachine: Parsing certificate...
	I0318 14:08:48.437032 1117758 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 14:08:48.437074 1117758 main.go:141] libmachine: Decoding PEM data...
	I0318 14:08:48.437089 1117758 main.go:141] libmachine: Parsing certificate...
	I0318 14:08:48.437121 1117758 main.go:141] libmachine: Running pre-create checks...
	I0318 14:08:48.437134 1117758 main.go:141] libmachine: (custom-flannel-059272) Calling .PreCreateCheck
	I0318 14:08:48.437518 1117758 main.go:141] libmachine: (custom-flannel-059272) Calling .GetConfigRaw
	I0318 14:08:48.438439 1117758 main.go:141] libmachine: Creating machine...
	I0318 14:08:48.438466 1117758 main.go:141] libmachine: (custom-flannel-059272) Calling .Create
	I0318 14:08:48.439210 1117758 main.go:141] libmachine: (custom-flannel-059272) Creating KVM machine...
	I0318 14:08:48.441002 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | found existing default KVM network
	I0318 14:08:48.441973 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:48.441759 1117797 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:39:5d:ea} reservation:<nil>}
	I0318 14:08:48.443088 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:48.442989 1117797 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00031e8b0}
	I0318 14:08:48.443108 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | created network xml: 
	I0318 14:08:48.443121 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | <network>
	I0318 14:08:48.443131 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG |   <name>mk-custom-flannel-059272</name>
	I0318 14:08:48.443141 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG |   <dns enable='no'/>
	I0318 14:08:48.443478 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG |   
	I0318 14:08:48.443503 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0318 14:08:48.443518 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG |     <dhcp>
	I0318 14:08:48.443554 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0318 14:08:48.443578 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG |     </dhcp>
	I0318 14:08:48.443588 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG |   </ip>
	I0318 14:08:48.443600 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG |   
	I0318 14:08:48.443609 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | </network>
	I0318 14:08:48.443617 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | 
	I0318 14:08:48.449655 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | trying to create private KVM network mk-custom-flannel-059272 192.168.50.0/24...
	I0318 14:08:48.548047 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | private KVM network mk-custom-flannel-059272 192.168.50.0/24 created
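The two network.go lines above show the kvm2 driver scanning candidate private /24 subnets: 192.168.39.0/24 is already used by another profile, so 192.168.50.0/24 is picked and the libvirt network XML is generated for it. A rough sketch of that scan, assuming a hypothetical taken() callback instead of minikube's real interface inspection:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks 192.168.39.0/24, 192.168.50.0/24, 192.168.61.0/24, ...
// and returns the first candidate not reported as taken. The step of 11 and
// the taken() callback are illustrative stand-ins for the driver's real logic.
func firstFreeSubnet(taken func(*net.IPNet) bool) *net.IPNet {
	for third := 39; third < 255; third += 11 {
		_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if taken(subnet) {
			fmt.Printf("skipping subnet %s that is taken\n", subnet)
			continue
		}
		return subnet
	}
	return nil
}

func main() {
	used := map[string]bool{"192.168.39.0/24": true} // e.g. held by another profile
	free := firstFreeSubnet(func(s *net.IPNet) bool { return used[s.String()] })
	fmt.Println("using free private subnet", free) // 192.168.50.0/24
}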
	I0318 14:08:48.548164 1117758 main.go:141] libmachine: (custom-flannel-059272) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/custom-flannel-059272 ...
	I0318 14:08:48.548253 1117758 main.go:141] libmachine: (custom-flannel-059272) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 14:08:48.548301 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:48.548240 1117797 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:08:48.548504 1117758 main.go:141] libmachine: (custom-flannel-059272) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 14:08:48.835028 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:48.834889 1117797 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/custom-flannel-059272/id_rsa...
	I0318 14:08:49.006573 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:49.006430 1117797 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/custom-flannel-059272/custom-flannel-059272.rawdisk...
	I0318 14:08:49.006606 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | Writing magic tar header
	I0318 14:08:49.006619 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | Writing SSH key tar header
	I0318 14:08:49.006627 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:49.006567 1117797 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/custom-flannel-059272 ...
	I0318 14:08:49.006723 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/custom-flannel-059272
	I0318 14:08:49.006753 1117758 main.go:141] libmachine: (custom-flannel-059272) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/custom-flannel-059272 (perms=drwx------)
	I0318 14:08:49.006765 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 14:08:49.006782 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:08:49.006797 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 14:08:49.006810 1117758 main.go:141] libmachine: (custom-flannel-059272) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 14:08:49.006825 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 14:08:49.006847 1117758 main.go:141] libmachine: (custom-flannel-059272) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 14:08:49.006865 1117758 main.go:141] libmachine: (custom-flannel-059272) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 14:08:49.006878 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | Checking permissions on dir: /home/jenkins
	I0318 14:08:49.006890 1117758 main.go:141] libmachine: (custom-flannel-059272) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 14:08:49.006908 1117758 main.go:141] libmachine: (custom-flannel-059272) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 14:08:49.006925 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | Checking permissions on dir: /home
	I0318 14:08:49.006937 1117758 main.go:141] libmachine: (custom-flannel-059272) Creating domain...
	I0318 14:08:49.006952 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | Skipping /home - not owner
	I0318 14:08:49.008345 1117758 main.go:141] libmachine: (custom-flannel-059272) define libvirt domain using xml: 
	I0318 14:08:49.008369 1117758 main.go:141] libmachine: (custom-flannel-059272) <domain type='kvm'>
	I0318 14:08:49.008379 1117758 main.go:141] libmachine: (custom-flannel-059272)   <name>custom-flannel-059272</name>
	I0318 14:08:49.008387 1117758 main.go:141] libmachine: (custom-flannel-059272)   <memory unit='MiB'>3072</memory>
	I0318 14:08:49.008394 1117758 main.go:141] libmachine: (custom-flannel-059272)   <vcpu>2</vcpu>
	I0318 14:08:49.008400 1117758 main.go:141] libmachine: (custom-flannel-059272)   <features>
	I0318 14:08:49.008408 1117758 main.go:141] libmachine: (custom-flannel-059272)     <acpi/>
	I0318 14:08:49.008415 1117758 main.go:141] libmachine: (custom-flannel-059272)     <apic/>
	I0318 14:08:49.008428 1117758 main.go:141] libmachine: (custom-flannel-059272)     <pae/>
	I0318 14:08:49.008435 1117758 main.go:141] libmachine: (custom-flannel-059272)     
	I0318 14:08:49.008448 1117758 main.go:141] libmachine: (custom-flannel-059272)   </features>
	I0318 14:08:49.008458 1117758 main.go:141] libmachine: (custom-flannel-059272)   <cpu mode='host-passthrough'>
	I0318 14:08:49.008469 1117758 main.go:141] libmachine: (custom-flannel-059272)   
	I0318 14:08:49.008476 1117758 main.go:141] libmachine: (custom-flannel-059272)   </cpu>
	I0318 14:08:49.008491 1117758 main.go:141] libmachine: (custom-flannel-059272)   <os>
	I0318 14:08:49.008498 1117758 main.go:141] libmachine: (custom-flannel-059272)     <type>hvm</type>
	I0318 14:08:49.008508 1117758 main.go:141] libmachine: (custom-flannel-059272)     <boot dev='cdrom'/>
	I0318 14:08:49.008519 1117758 main.go:141] libmachine: (custom-flannel-059272)     <boot dev='hd'/>
	I0318 14:08:49.008528 1117758 main.go:141] libmachine: (custom-flannel-059272)     <bootmenu enable='no'/>
	I0318 14:08:49.008538 1117758 main.go:141] libmachine: (custom-flannel-059272)   </os>
	I0318 14:08:49.008546 1117758 main.go:141] libmachine: (custom-flannel-059272)   <devices>
	I0318 14:08:49.008557 1117758 main.go:141] libmachine: (custom-flannel-059272)     <disk type='file' device='cdrom'>
	I0318 14:08:49.008575 1117758 main.go:141] libmachine: (custom-flannel-059272)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/custom-flannel-059272/boot2docker.iso'/>
	I0318 14:08:49.008587 1117758 main.go:141] libmachine: (custom-flannel-059272)       <target dev='hdc' bus='scsi'/>
	I0318 14:08:49.008617 1117758 main.go:141] libmachine: (custom-flannel-059272)       <readonly/>
	I0318 14:08:49.008639 1117758 main.go:141] libmachine: (custom-flannel-059272)     </disk>
	I0318 14:08:49.008649 1117758 main.go:141] libmachine: (custom-flannel-059272)     <disk type='file' device='disk'>
	I0318 14:08:49.008663 1117758 main.go:141] libmachine: (custom-flannel-059272)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 14:08:49.008685 1117758 main.go:141] libmachine: (custom-flannel-059272)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/custom-flannel-059272/custom-flannel-059272.rawdisk'/>
	I0318 14:08:49.008697 1117758 main.go:141] libmachine: (custom-flannel-059272)       <target dev='hda' bus='virtio'/>
	I0318 14:08:49.008708 1117758 main.go:141] libmachine: (custom-flannel-059272)     </disk>
	I0318 14:08:49.008716 1117758 main.go:141] libmachine: (custom-flannel-059272)     <interface type='network'>
	I0318 14:08:49.008722 1117758 main.go:141] libmachine: (custom-flannel-059272)       <source network='mk-custom-flannel-059272'/>
	I0318 14:08:49.008734 1117758 main.go:141] libmachine: (custom-flannel-059272)       <model type='virtio'/>
	I0318 14:08:49.008746 1117758 main.go:141] libmachine: (custom-flannel-059272)     </interface>
	I0318 14:08:49.008755 1117758 main.go:141] libmachine: (custom-flannel-059272)     <interface type='network'>
	I0318 14:08:49.008768 1117758 main.go:141] libmachine: (custom-flannel-059272)       <source network='default'/>
	I0318 14:08:49.008779 1117758 main.go:141] libmachine: (custom-flannel-059272)       <model type='virtio'/>
	I0318 14:08:49.008790 1117758 main.go:141] libmachine: (custom-flannel-059272)     </interface>
	I0318 14:08:49.008798 1117758 main.go:141] libmachine: (custom-flannel-059272)     <serial type='pty'>
	I0318 14:08:49.008810 1117758 main.go:141] libmachine: (custom-flannel-059272)       <target port='0'/>
	I0318 14:08:49.008818 1117758 main.go:141] libmachine: (custom-flannel-059272)     </serial>
	I0318 14:08:49.008823 1117758 main.go:141] libmachine: (custom-flannel-059272)     <console type='pty'>
	I0318 14:08:49.008833 1117758 main.go:141] libmachine: (custom-flannel-059272)       <target type='serial' port='0'/>
	I0318 14:08:49.008847 1117758 main.go:141] libmachine: (custom-flannel-059272)     </console>
	I0318 14:08:49.008858 1117758 main.go:141] libmachine: (custom-flannel-059272)     <rng model='virtio'>
	I0318 14:08:49.008868 1117758 main.go:141] libmachine: (custom-flannel-059272)       <backend model='random'>/dev/random</backend>
	I0318 14:08:49.008878 1117758 main.go:141] libmachine: (custom-flannel-059272)     </rng>
	I0318 14:08:49.008886 1117758 main.go:141] libmachine: (custom-flannel-059272)     
	I0318 14:08:49.008896 1117758 main.go:141] libmachine: (custom-flannel-059272)     
	I0318 14:08:49.008904 1117758 main.go:141] libmachine: (custom-flannel-059272)   </devices>
	I0318 14:08:49.008911 1117758 main.go:141] libmachine: (custom-flannel-059272) </domain>
	I0318 14:08:49.008922 1117758 main.go:141] libmachine: (custom-flannel-059272) 
	I0318 14:08:49.013463 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | domain custom-flannel-059272 has defined MAC address 52:54:00:37:77:c6 in network default
	I0318 14:08:49.014126 1117758 main.go:141] libmachine: (custom-flannel-059272) Ensuring networks are active...
	I0318 14:08:49.014153 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | domain custom-flannel-059272 has defined MAC address 52:54:00:14:5b:c8 in network mk-custom-flannel-059272
	I0318 14:08:49.014870 1117758 main.go:141] libmachine: (custom-flannel-059272) Ensuring network default is active
	I0318 14:08:49.015277 1117758 main.go:141] libmachine: (custom-flannel-059272) Ensuring network mk-custom-flannel-059272 is active
	I0318 14:08:49.015974 1117758 main.go:141] libmachine: (custom-flannel-059272) Getting domain xml...
	I0318 14:08:49.016800 1117758 main.go:141] libmachine: (custom-flannel-059272) Creating domain...
	I0318 14:08:50.869468 1117758 main.go:141] libmachine: (custom-flannel-059272) Waiting to get IP...
	I0318 14:08:50.873011 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | domain custom-flannel-059272 has defined MAC address 52:54:00:14:5b:c8 in network mk-custom-flannel-059272
	I0318 14:08:50.873775 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | unable to find current IP address of domain custom-flannel-059272 in network mk-custom-flannel-059272
	I0318 14:08:50.873803 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:50.873648 1117797 retry.go:31] will retry after 242.787657ms: waiting for machine to come up
	I0318 14:08:51.118298 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | domain custom-flannel-059272 has defined MAC address 52:54:00:14:5b:c8 in network mk-custom-flannel-059272
	I0318 14:08:51.119183 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | unable to find current IP address of domain custom-flannel-059272 in network mk-custom-flannel-059272
	I0318 14:08:51.119209 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:51.119059 1117797 retry.go:31] will retry after 296.625636ms: waiting for machine to come up
	I0318 14:08:51.419246 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | domain custom-flannel-059272 has defined MAC address 52:54:00:14:5b:c8 in network mk-custom-flannel-059272
	I0318 14:08:51.420028 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | unable to find current IP address of domain custom-flannel-059272 in network mk-custom-flannel-059272
	I0318 14:08:51.420058 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:51.419929 1117797 retry.go:31] will retry after 322.42719ms: waiting for machine to come up
	I0318 14:08:51.743863 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | domain custom-flannel-059272 has defined MAC address 52:54:00:14:5b:c8 in network mk-custom-flannel-059272
	I0318 14:08:51.763644 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | unable to find current IP address of domain custom-flannel-059272 in network mk-custom-flannel-059272
	I0318 14:08:51.763691 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:51.763573 1117797 retry.go:31] will retry after 388.530995ms: waiting for machine to come up
	I0318 14:08:52.349735 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | domain custom-flannel-059272 has defined MAC address 52:54:00:14:5b:c8 in network mk-custom-flannel-059272
	I0318 14:08:52.350399 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | unable to find current IP address of domain custom-flannel-059272 in network mk-custom-flannel-059272
	I0318 14:08:52.350431 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:52.350354 1117797 retry.go:31] will retry after 693.585162ms: waiting for machine to come up
	I0318 14:08:53.046238 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | domain custom-flannel-059272 has defined MAC address 52:54:00:14:5b:c8 in network mk-custom-flannel-059272
	I0318 14:08:53.046888 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | unable to find current IP address of domain custom-flannel-059272 in network mk-custom-flannel-059272
	I0318 14:08:53.046935 1117758 main.go:141] libmachine: (custom-flannel-059272) DBG | I0318 14:08:53.046815 1117797 retry.go:31] will retry after 618.491745ms: waiting for machine to come up
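The retry.go lines above poll for the new domain's IP address and sleep for a growing, jittered interval between attempts ("will retry after 242ms ... 618ms"). A simplified sketch of that wait loop, with a hypothetical lookup callback standing in for the driver's libvirt DHCP lease parsing:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookup with a jittered, growing delay, roughly matching
// the "waiting for machine to come up" lines in the log.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		delay := time.Duration(200+rand.Intn(300)*(i+1)) * time.Millisecond
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.50.77", nil // illustrative address
	}, 10)
	fmt.Println(ip, err)
}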
	I0318 14:08:52.535975 1115945 api_server.go:279] https://192.168.61.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:08:52.536009 1115945 api_server.go:103] status: https://192.168.61.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:08:52.536024 1115945 api_server.go:253] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0318 14:08:52.620890 1115945 api_server.go:279] https://192.168.61.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:08:52.620930 1115945 api_server.go:103] status: https://192.168.61.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:08:52.833348 1115945 api_server.go:253] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0318 14:08:52.838854 1115945 api_server.go:279] https://192.168.61.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:08:52.838894 1115945 api_server.go:103] status: https://192.168.61.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:08:53.332426 1115945 api_server.go:253] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0318 14:08:53.337440 1115945 api_server.go:279] https://192.168.61.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:08:53.337495 1115945 api_server.go:103] status: https://192.168.61.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:08:53.833035 1115945 api_server.go:253] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0318 14:08:53.838968 1115945 api_server.go:279] https://192.168.61.54:8443/healthz returned 200:
	ok
	I0318 14:08:53.848523 1115945 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:08:53.848567 1115945 api_server.go:131] duration metric: took 4.516230551s to wait for apiserver health ...
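The healthz block above is the usual apiserver startup sequence: first a 403 for the anonymous probe, then 500s while poststarthooks (bootstrap-roles, bootstrap-controller, etc.) finish, and finally a plain 200 "ok". A minimal sketch of polling /healthz until it answers 200; TLS verification is skipped only to keep the sketch self-contained, whereas minikube's real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.54:8443/healthz", 2*time.Minute))
}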
	I0318 14:08:53.848579 1115945 cni.go:84] Creating CNI manager for ""
	I0318 14:08:53.848588 1115945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:08:53.850723 1115945 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:08:53.852160 1115945 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:08:53.865883 1115945 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
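The two ssh_runner lines above create /etc/cni/net.d on the node and copy a 457-byte 1-k8s.conflist into it. The file's exact contents are not shown in the log; the sketch below writes an illustrative bridge + portmap conflist of the general shape the bridge CNI plugin expects, not minikube's literal file:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// An illustrative bridge CNI configuration; every field value here is an
// assumption for the sketch, not the file minikube actually installs.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	dir := "/etc/cni/net.d" // needs root on a real node; point at a temp dir to experiment
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", path)
}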
	I0318 14:08:53.893301 1115945 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:08:53.904541 1115945 system_pods.go:59] 3 kube-system pods found
	I0318 14:08:53.904649 1115945 system_pods.go:61] "etcd-kubernetes-upgrade-140251" [0652d8f2-2a0b-47e9-bdb9-745a40bcc270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:08:53.904681 1115945 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-140251" [8090a2e6-3b73-4ca4-9dc7-847dde275a18] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:08:53.904719 1115945 system_pods.go:61] "storage-provisioner" [9fa6b452-e546-4eae-84e8-8f1184c00376] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0318 14:08:53.904749 1115945 system_pods.go:74] duration metric: took 11.419948ms to wait for pod list to return data ...
	I0318 14:08:53.904772 1115945 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:08:53.909339 1115945 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:08:53.909372 1115945 node_conditions.go:123] node cpu capacity is 2
	I0318 14:08:53.909384 1115945 node_conditions.go:105] duration metric: took 4.594951ms to run NodePressure ...
	I0318 14:08:53.909406 1115945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:08:54.202011 1115945 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:08:54.217848 1115945 ops.go:34] apiserver oom_adj: -16
	I0318 14:08:54.217889 1115945 kubeadm.go:591] duration metric: took 8.323885673s to restartPrimaryControlPlane
	I0318 14:08:54.217901 1115945 kubeadm.go:393] duration metric: took 8.477042037s to StartCluster
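After kubeadm finishes the addon phase, the test checks the apiserver's OOM score adjustment; the two Run lines above boil down to cat /proc/$(pgrep kube-apiserver)/oom_adj, which prints -16. A small sketch of the same check done directly in Go, assuming it runs on the node itself and using pgrep much as the test does over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// pidOf shells out to pgrep to find the newest exact-name match.
func pidOf(name string) (string, error) {
	out, err := exec.Command("pgrep", "-xn", name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	pid, err := pidOf("kube-apiserver")
	if err != nil {
		fmt.Println("kube-apiserver not running here:", err)
		return
	}
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}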
	I0318 14:08:54.217923 1115945 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:08:54.218019 1115945 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:08:54.218997 1115945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:08:54.219278 1115945 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.54 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:08:54.220889 1115945 out.go:177] * Verifying Kubernetes components...
	I0318 14:08:54.219390 1115945 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:08:54.219541 1115945 config.go:182] Loaded profile config "kubernetes-upgrade-140251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:08:54.222484 1115945 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-140251"
	I0318 14:08:54.222513 1115945 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-140251"
	I0318 14:08:54.222523 1115945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:08:54.222530 1115945 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-140251"
	W0318 14:08:54.222539 1115945 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:08:54.222560 1115945 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-140251"
	I0318 14:08:54.222571 1115945 host.go:66] Checking if "kubernetes-upgrade-140251" exists ...
	I0318 14:08:54.222867 1115945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:08:54.222899 1115945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:08:54.222967 1115945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:08:54.222978 1115945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:08:54.241452 1115945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0318 14:08:54.242198 1115945 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:08:54.242832 1115945 main.go:141] libmachine: Using API Version  1
	I0318 14:08:54.242862 1115945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:08:54.243343 1115945 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:08:54.244000 1115945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:08:54.244046 1115945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:08:54.246434 1115945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42405
	I0318 14:08:54.247099 1115945 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:08:54.247620 1115945 main.go:141] libmachine: Using API Version  1
	I0318 14:08:54.247640 1115945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:08:54.248078 1115945 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:08:54.248369 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetState
	I0318 14:08:54.251464 1115945 kapi.go:59] client config for kubernetes-upgrade-140251: &rest.Config{Host:"https://192.168.61.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/client.crt", KeyFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kubernetes-upgrade-140251/client.key", CAFile:"/home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
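The kapi.go line above dumps the rest.Config minikube builds from the profile's client certificate, key, and cluster CA. A compact sketch of constructing the equivalent client-go config and clientset from those three files; the paths follow the log, and k8s.io/client-go is assumed to be on the module path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := "/home/jenkins/minikube-integration/18427-1067917/.minikube"
	cfg := &rest.Config{
		Host: "https://192.168.61.54:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/profiles/kubernetes-upgrade-140251/client.crt",
			KeyFile:  profile + "/profiles/kubernetes-upgrade-140251/client.key",
			CAFile:   profile + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same kind of query the test performs while "waiting for kube-system pods to appear".
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}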
	I0318 14:08:54.251815 1115945 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-140251"
	W0318 14:08:54.251860 1115945 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:08:54.251894 1115945 host.go:66] Checking if "kubernetes-upgrade-140251" exists ...
	I0318 14:08:54.252289 1115945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:08:54.252336 1115945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:08:54.265589 1115945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
	I0318 14:08:54.266162 1115945 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:08:54.266669 1115945 main.go:141] libmachine: Using API Version  1
	I0318 14:08:54.266696 1115945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:08:54.268434 1115945 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:08:54.268706 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetState
	I0318 14:08:54.270832 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .DriverName
	I0318 14:08:54.273110 1115945 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:08:54.274805 1115945 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:08:54.274823 1115945 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:08:54.274846 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:08:54.275822 1115945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
	I0318 14:08:54.276622 1115945 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:08:54.277183 1115945 main.go:141] libmachine: Using API Version  1
	I0318 14:08:54.277200 1115945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:08:54.277627 1115945 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:08:54.278254 1115945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:08:54.278294 1115945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:08:54.278680 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:08:54.280317 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:08:06 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:08:54.280344 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:08:54.280618 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:08:54.280800 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:08:54.280930 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:08:54.281076 1115945 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/id_rsa Username:docker}
	I0318 14:08:54.302129 1115945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42473
	I0318 14:08:54.302667 1115945 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:08:54.303312 1115945 main.go:141] libmachine: Using API Version  1
	I0318 14:08:54.303332 1115945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:08:54.303676 1115945 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:08:54.303925 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetState
	I0318 14:08:54.306286 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .DriverName
	I0318 14:08:54.306605 1115945 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:08:54.306619 1115945 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:08:54.306642 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHHostname
	I0318 14:08:54.310020 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:08:54.310289 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9c:c2", ip: ""} in network mk-kubernetes-upgrade-140251: {Iface:virbr2 ExpiryTime:2024-03-18 15:08:06 +0000 UTC Type:0 Mac:52:54:00:10:9c:c2 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:kubernetes-upgrade-140251 Clientid:01:52:54:00:10:9c:c2}
	I0318 14:08:54.310309 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | domain kubernetes-upgrade-140251 has defined IP address 192.168.61.54 and MAC address 52:54:00:10:9c:c2 in network mk-kubernetes-upgrade-140251
	I0318 14:08:54.310605 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHPort
	I0318 14:08:54.310798 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHKeyPath
	I0318 14:08:54.311019 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .GetSSHUsername
	I0318 14:08:54.311168 1115945 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/kubernetes-upgrade-140251/id_rsa Username:docker}
	I0318 14:08:54.406062 1115945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:08:54.429767 1115945 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:08:54.429862 1115945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:08:54.452714 1115945 api_server.go:72] duration metric: took 233.3941ms to wait for apiserver process to appear ...
	I0318 14:08:54.452748 1115945 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:08:54.452779 1115945 api_server.go:253] Checking apiserver healthz at https://192.168.61.54:8443/healthz ...
	I0318 14:08:54.458761 1115945 api_server.go:279] https://192.168.61.54:8443/healthz returned 200:
	ok
	I0318 14:08:54.460341 1115945 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:08:54.460380 1115945 api_server.go:131] duration metric: took 7.623564ms to wait for apiserver health ...
	I0318 14:08:54.460393 1115945 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:08:54.465500 1115945 system_pods.go:59] 3 kube-system pods found
	I0318 14:08:54.465544 1115945 system_pods.go:61] "etcd-kubernetes-upgrade-140251" [0652d8f2-2a0b-47e9-bdb9-745a40bcc270] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:08:54.465557 1115945 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-140251" [8090a2e6-3b73-4ca4-9dc7-847dde275a18] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:08:54.465569 1115945 system_pods.go:61] "storage-provisioner" [9fa6b452-e546-4eae-84e8-8f1184c00376] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0318 14:08:54.465581 1115945 system_pods.go:74] duration metric: took 5.17878ms to wait for pod list to return data ...
	I0318 14:08:54.465595 1115945 kubeadm.go:576] duration metric: took 246.285002ms to wait for: map[apiserver:true system_pods:true]
	I0318 14:08:54.465615 1115945 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:08:54.469599 1115945 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:08:54.469630 1115945 node_conditions.go:123] node cpu capacity is 2
	I0318 14:08:54.469644 1115945 node_conditions.go:105] duration metric: took 4.020534ms to run NodePressure ...
	I0318 14:08:54.469658 1115945 start.go:240] waiting for startup goroutines ...
	I0318 14:08:54.518310 1115945 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:08:54.540440 1115945 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:08:55.433773 1115945 main.go:141] libmachine: Making call to close driver server
	I0318 14:08:55.433808 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .Close
	I0318 14:08:55.433901 1115945 main.go:141] libmachine: Making call to close driver server
	I0318 14:08:55.433917 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .Close
	I0318 14:08:55.434266 1115945 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:08:55.434337 1115945 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:08:55.434358 1115945 main.go:141] libmachine: Making call to close driver server
	I0318 14:08:55.434379 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .Close
	I0318 14:08:55.436047 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Closing plugin on server side
	I0318 14:08:55.436076 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) DBG | Closing plugin on server side
	I0318 14:08:55.436082 1115945 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:08:55.436103 1115945 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:08:55.436109 1115945 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:08:55.436119 1115945 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:08:55.436127 1115945 main.go:141] libmachine: Making call to close driver server
	I0318 14:08:55.436137 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .Close
	I0318 14:08:55.436840 1115945 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:08:55.436857 1115945 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:08:55.444835 1115945 main.go:141] libmachine: Making call to close driver server
	I0318 14:08:55.444862 1115945 main.go:141] libmachine: (kubernetes-upgrade-140251) Calling .Close
	I0318 14:08:55.445209 1115945 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:08:55.445245 1115945 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:08:55.447355 1115945 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 14:08:55.448989 1115945 addons.go:505] duration metric: took 1.229596827s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 14:08:55.449038 1115945 start.go:245] waiting for cluster config update ...
	I0318 14:08:55.449054 1115945 start.go:254] writing updated cluster config ...
	I0318 14:08:55.449351 1115945 ssh_runner.go:195] Run: rm -f paused
	I0318 14:08:55.508549 1115945 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 14:08:55.511607 1115945 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-140251" cluster and "default" namespace by default
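The final start.go line compares the local kubectl version (1.29.3) against the cluster version (1.29.0-rc.2) and reports a minor-version skew of 0. A tiny sketch of that comparison, assuming plain "major.minor.patch[-suffix]" strings rather than minikube's version helpers:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two version strings such as "1.29.3" and "1.29.0-rc.2".
func minorSkew(kubectl, cluster string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(kubectl) - minor(cluster)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Printf("kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: %d)\n",
		minorSkew("1.29.3", "1.29.0-rc.2"))
}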
	I0318 14:08:52.235874 1114400 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-mplzh" in "kube-system" namespace has status "Ready":"False"
	I0318 14:08:54.737696 1114400 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-mplzh" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.416746845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770936416713037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5a13628-d6dd-4035-8714-0beb1e5e7b16 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.417502460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32b8e033-34e7-4edc-86bd-808c1e02f4dd name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.417560908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32b8e033-34e7-4edc-86bd-808c1e02f4dd name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.417812698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1641f2b4b40b457db4bd3acd15eb02dbe97b48f9570fadbaa293d8e52045b97,PodSandboxId:94f17a190af512e0c5a31153e2ebcadf0915b57625215dc1e1d033de2225d439,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770928870696483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58017165b7fb5bfea42b12c6183f028,},Annotations:map[string]string{io.kubernetes.container.hash: f3ebb8c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e7ff69460fb4a36e6697c6cfaace474f4ea1137d9913e06d145b1512a6baee,PodSandboxId:4afecc4c8baa91eb836d22eb9d29e5dbdc553697194ff1d54ded51af52f72911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770928890598262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd35877095d104655c40c62f1494075,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6633f2fd97914f6be3127266d7a7a5e15582a9d810a4135cb56cf41bbd355aea,PodSandboxId:2d9dd16ef775f617ef535d5bb3c0b2551c2dd2e42d12205b6aee70a024418ce6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770928839628230,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff441e20ca18fe912d098d21ea1ce57,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f5431bad1201ac51a47f5c1a8f7fdbe0c252cf6407ac5e35da55460e766699,PodSandboxId:669f2abad0fdca197b6ff6f46e01fef4a09a18901aabb6ed5e8f853b2906c0e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770928823242778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9cf2884337d0c76e7d4d4471ce74053,},Annotations:map[string]string{io.kubernetes.container.hash: 2882efc4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb47b39b054e873bebbe0febe76c029c7391afe4dbd2d4efa2bb0de51c510631,PodSandboxId:f20c86e32322057e457bdfc6e6722cef20387e3d38b563ecdd2bce12e2908239,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710770921086818988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58017165b7fb5bfea42b12c6183f028,},Annotations:map[string]string{io.kubernetes.container.hash: f3ebb8c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f97ca85960cb76b4579a22a387bfc690ac3d1bf23ad40ae1d054da3eebcd1957,PodSandboxId:3293229580c3f703b1e03afccfa92277db9aec9fdcf960cd7df6349d02574382,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710770920990932982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff441e20ca18fe912d098d21ea1ce57,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7506f3f9c36c41b93d50aa7c78c6f5d0067f9a44a04f47e51a9aa264a7983bc2,PodSandboxId:ecb56331c1fb22817ebb6975118e467b7a1d1657b4437de3475ae7186332a3ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710770920892878847,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd35877095d104655c40c62f1494075,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2910d0aa0fd6597a38db2e36211eacddbefec051af25b325ca8a46f34a091203,PodSandboxId:0a16b0769156eff9d2cac03664cd9f5b84d072e66bcb8379fc6d63e2eba9a514,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710770920792625436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9cf2884337d0c76e7d4d4471ce74053,},Annotations:map[string]string{io.kubernetes.container.hash: 2882efc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32b8e033-34e7-4edc-86bd-808c1e02f4dd name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.458993845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7cf43af-ac18-4427-b3ba-ff8c2682303c name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.459140873Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7cf43af-ac18-4427-b3ba-ff8c2682303c name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.459998784Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee6058c7-8b19-4b6c-90b3-f1df79aa8f64 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.460576371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770936460553135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee6058c7-8b19-4b6c-90b3-f1df79aa8f64 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.461143471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d71c83e3-9dae-4f57-9a9f-d18f21d56fea name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.461219035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d71c83e3-9dae-4f57-9a9f-d18f21d56fea name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.461426378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1641f2b4b40b457db4bd3acd15eb02dbe97b48f9570fadbaa293d8e52045b97,PodSandboxId:94f17a190af512e0c5a31153e2ebcadf0915b57625215dc1e1d033de2225d439,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770928870696483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58017165b7fb5bfea42b12c6183f028,},Annotations:map[string]string{io.kubernetes.container.hash: f3ebb8c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e7ff69460fb4a36e6697c6cfaace474f4ea1137d9913e06d145b1512a6baee,PodSandboxId:4afecc4c8baa91eb836d22eb9d29e5dbdc553697194ff1d54ded51af52f72911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770928890598262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd35877095d104655c40c62f1494075,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6633f2fd97914f6be3127266d7a7a5e15582a9d810a4135cb56cf41bbd355aea,PodSandboxId:2d9dd16ef775f617ef535d5bb3c0b2551c2dd2e42d12205b6aee70a024418ce6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770928839628230,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff441e20ca18fe912d098d21ea1ce57,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f5431bad1201ac51a47f5c1a8f7fdbe0c252cf6407ac5e35da55460e766699,PodSandboxId:669f2abad0fdca197b6ff6f46e01fef4a09a18901aabb6ed5e8f853b2906c0e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770928823242778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9cf2884337d0c76e7d4d4471ce74053,},Annotations:map[string]string{io.kubernetes.container.hash: 2882efc4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb47b39b054e873bebbe0febe76c029c7391afe4dbd2d4efa2bb0de51c510631,PodSandboxId:f20c86e32322057e457bdfc6e6722cef20387e3d38b563ecdd2bce12e2908239,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710770921086818988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58017165b7fb5bfea42b12c6183f028,},Annotations:map[string]string{io.kubernetes.container.hash: f3ebb8c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f97ca85960cb76b4579a22a387bfc690ac3d1bf23ad40ae1d054da3eebcd1957,PodSandboxId:3293229580c3f703b1e03afccfa92277db9aec9fdcf960cd7df6349d02574382,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710770920990932982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff441e20ca18fe912d098d21ea1ce57,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7506f3f9c36c41b93d50aa7c78c6f5d0067f9a44a04f47e51a9aa264a7983bc2,PodSandboxId:ecb56331c1fb22817ebb6975118e467b7a1d1657b4437de3475ae7186332a3ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710770920892878847,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd35877095d104655c40c62f1494075,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2910d0aa0fd6597a38db2e36211eacddbefec051af25b325ca8a46f34a091203,PodSandboxId:0a16b0769156eff9d2cac03664cd9f5b84d072e66bcb8379fc6d63e2eba9a514,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710770920792625436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9cf2884337d0c76e7d4d4471ce74053,},Annotations:map[string]string{io.kubernetes.container.hash: 2882efc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d71c83e3-9dae-4f57-9a9f-d18f21d56fea name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.522824051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da0ee1ca-d1a5-4c56-b720-183dfa4e8f32 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.523185461Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da0ee1ca-d1a5-4c56-b720-183dfa4e8f32 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.524352043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53095cc6-5ae2-4eeb-a045-6adacf990bba name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.524709262Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770936524690312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53095cc6-5ae2-4eeb-a045-6adacf990bba name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.525286845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0e06c35-380b-4daa-8990-0a97129fd8eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.525343832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0e06c35-380b-4daa-8990-0a97129fd8eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.525558839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1641f2b4b40b457db4bd3acd15eb02dbe97b48f9570fadbaa293d8e52045b97,PodSandboxId:94f17a190af512e0c5a31153e2ebcadf0915b57625215dc1e1d033de2225d439,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770928870696483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58017165b7fb5bfea42b12c6183f028,},Annotations:map[string]string{io.kubernetes.container.hash: f3ebb8c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e7ff69460fb4a36e6697c6cfaace474f4ea1137d9913e06d145b1512a6baee,PodSandboxId:4afecc4c8baa91eb836d22eb9d29e5dbdc553697194ff1d54ded51af52f72911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770928890598262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd35877095d104655c40c62f1494075,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6633f2fd97914f6be3127266d7a7a5e15582a9d810a4135cb56cf41bbd355aea,PodSandboxId:2d9dd16ef775f617ef535d5bb3c0b2551c2dd2e42d12205b6aee70a024418ce6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770928839628230,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff441e20ca18fe912d098d21ea1ce57,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f5431bad1201ac51a47f5c1a8f7fdbe0c252cf6407ac5e35da55460e766699,PodSandboxId:669f2abad0fdca197b6ff6f46e01fef4a09a18901aabb6ed5e8f853b2906c0e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770928823242778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9cf2884337d0c76e7d4d4471ce74053,},Annotations:map[string]string{io.kubernetes.container.hash: 2882efc4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb47b39b054e873bebbe0febe76c029c7391afe4dbd2d4efa2bb0de51c510631,PodSandboxId:f20c86e32322057e457bdfc6e6722cef20387e3d38b563ecdd2bce12e2908239,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710770921086818988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58017165b7fb5bfea42b12c6183f028,},Annotations:map[string]string{io.kubernetes.container.hash: f3ebb8c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f97ca85960cb76b4579a22a387bfc690ac3d1bf23ad40ae1d054da3eebcd1957,PodSandboxId:3293229580c3f703b1e03afccfa92277db9aec9fdcf960cd7df6349d02574382,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710770920990932982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff441e20ca18fe912d098d21ea1ce57,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7506f3f9c36c41b93d50aa7c78c6f5d0067f9a44a04f47e51a9aa264a7983bc2,PodSandboxId:ecb56331c1fb22817ebb6975118e467b7a1d1657b4437de3475ae7186332a3ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710770920892878847,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd35877095d104655c40c62f1494075,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2910d0aa0fd6597a38db2e36211eacddbefec051af25b325ca8a46f34a091203,PodSandboxId:0a16b0769156eff9d2cac03664cd9f5b84d072e66bcb8379fc6d63e2eba9a514,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710770920792625436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9cf2884337d0c76e7d4d4471ce74053,},Annotations:map[string]string{io.kubernetes.container.hash: 2882efc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0e06c35-380b-4daa-8990-0a97129fd8eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.564115364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8420cd11-fb74-46e6-8e5b-f52bb19ef896 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.564195665Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8420cd11-fb74-46e6-8e5b-f52bb19ef896 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.565236313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=895ae816-46da-4c0a-8501-0771b57d2357 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.565577737Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770936565558381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=895ae816-46da-4c0a-8501-0771b57d2357 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.566410746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2dc845f-fcfa-4ee9-9622-abe169b239ad name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.566458803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2dc845f-fcfa-4ee9-9622-abe169b239ad name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:56 kubernetes-upgrade-140251 crio[1848]: time="2024-03-18 14:08:56.566649378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1641f2b4b40b457db4bd3acd15eb02dbe97b48f9570fadbaa293d8e52045b97,PodSandboxId:94f17a190af512e0c5a31153e2ebcadf0915b57625215dc1e1d033de2225d439,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770928870696483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58017165b7fb5bfea42b12c6183f028,},Annotations:map[string]string{io.kubernetes.container.hash: f3ebb8c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e7ff69460fb4a36e6697c6cfaace474f4ea1137d9913e06d145b1512a6baee,PodSandboxId:4afecc4c8baa91eb836d22eb9d29e5dbdc553697194ff1d54ded51af52f72911,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770928890598262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd35877095d104655c40c62f1494075,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6633f2fd97914f6be3127266d7a7a5e15582a9d810a4135cb56cf41bbd355aea,PodSandboxId:2d9dd16ef775f617ef535d5bb3c0b2551c2dd2e42d12205b6aee70a024418ce6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770928839628230,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff441e20ca18fe912d098d21ea1ce57,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f5431bad1201ac51a47f5c1a8f7fdbe0c252cf6407ac5e35da55460e766699,PodSandboxId:669f2abad0fdca197b6ff6f46e01fef4a09a18901aabb6ed5e8f853b2906c0e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770928823242778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9cf2884337d0c76e7d4d4471ce74053,},Annotations:map[string]string{io.kubernetes.container.hash: 2882efc4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb47b39b054e873bebbe0febe76c029c7391afe4dbd2d4efa2bb0de51c510631,PodSandboxId:f20c86e32322057e457bdfc6e6722cef20387e3d38b563ecdd2bce12e2908239,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710770921086818988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58017165b7fb5bfea42b12c6183f028,},Annotations:map[string]string{io.kubernetes.container.hash: f3ebb8c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f97ca85960cb76b4579a22a387bfc690ac3d1bf23ad40ae1d054da3eebcd1957,PodSandboxId:3293229580c3f703b1e03afccfa92277db9aec9fdcf960cd7df6349d02574382,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710770920990932982,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff441e20ca18fe912d098d21ea1ce57,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7506f3f9c36c41b93d50aa7c78c6f5d0067f9a44a04f47e51a9aa264a7983bc2,PodSandboxId:ecb56331c1fb22817ebb6975118e467b7a1d1657b4437de3475ae7186332a3ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710770920892878847,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd35877095d104655c40c62f1494075,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2910d0aa0fd6597a38db2e36211eacddbefec051af25b325ca8a46f34a091203,PodSandboxId:0a16b0769156eff9d2cac03664cd9f5b84d072e66bcb8379fc6d63e2eba9a514,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710770920792625436,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-140251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9cf2884337d0c76e7d4d4471ce74053,},Annotations:map[string]string{io.kubernetes.container.hash: 2882efc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2dc845f-fcfa-4ee9-9622-abe169b239ad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	72e7ff69460fb       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   7 seconds ago       Running             kube-controller-manager   2                   4afecc4c8baa9       kube-controller-manager-kubernetes-upgrade-140251
	d1641f2b4b40b       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   7 seconds ago       Running             etcd                      2                   94f17a190af51       etcd-kubernetes-upgrade-140251
	6633f2fd97914       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   7 seconds ago       Running             kube-scheduler            2                   2d9dd16ef775f       kube-scheduler-kubernetes-upgrade-140251
	e5f5431bad120       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   7 seconds ago       Running             kube-apiserver            2                   669f2abad0fdc       kube-apiserver-kubernetes-upgrade-140251
	fb47b39b054e8       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   15 seconds ago      Exited              etcd                      1                   f20c86e323220       etcd-kubernetes-upgrade-140251
	f97ca85960cb7       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   15 seconds ago      Exited              kube-scheduler            1                   3293229580c3f       kube-scheduler-kubernetes-upgrade-140251
	7506f3f9c36c4       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   15 seconds ago      Exited              kube-controller-manager   1                   ecb56331c1fb2       kube-controller-manager-kubernetes-upgrade-140251
	2910d0aa0fd65       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   15 seconds ago      Exited              kube-apiserver            1                   0a16b0769156e       kube-apiserver-kubernetes-upgrade-140251
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-140251
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-140251
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 14:08:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-140251
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:08:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:08:52 +0000   Mon, 18 Mar 2024 14:08:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:08:52 +0000   Mon, 18 Mar 2024 14:08:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:08:52 +0000   Mon, 18 Mar 2024 14:08:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:08:52 +0000   Mon, 18 Mar 2024 14:08:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.54
	  Hostname:    kubernetes-upgrade-140251
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b93f782717e4620840abc7b437a2b5d
	  System UUID:                7b93f782-717e-4620-840a-bc7b437a2b5d
	  Boot ID:                    00f258f7-fff4-4ccc-aa55-49e44f83779d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-140251                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-140251    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                300m (15%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 30s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)  kubelet  Node kubernetes-upgrade-140251 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)  kubelet  Node kubernetes-upgrade-140251 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x7 over 30s)  kubelet  Node kubernetes-upgrade-140251 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +1.667576] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.514548] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.078029] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077969] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.188280] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.188982] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.340016] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.634379] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +0.067402] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.750109] systemd-fstab-generator[856]: Ignoring "noauto" option for root device
	[ +10.208953] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	[  +0.107111] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.460283] systemd-fstab-generator[1771]: Ignoring "noauto" option for root device
	[  +0.138617] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.082042] systemd-fstab-generator[1784]: Ignoring "noauto" option for root device
	[  +0.240869] systemd-fstab-generator[1798]: Ignoring "noauto" option for root device
	[  +0.206733] systemd-fstab-generator[1810]: Ignoring "noauto" option for root device
	[  +0.415547] systemd-fstab-generator[1836]: Ignoring "noauto" option for root device
	[  +1.836851] systemd-fstab-generator[2163]: Ignoring "noauto" option for root device
	[  +3.041929] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +0.102272] kauditd_printk_skb: 146 callbacks suppressed
	[  +6.308126] systemd-fstab-generator[2555]: Ignoring "noauto" option for root device
	[  +0.098750] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [d1641f2b4b40b457db4bd3acd15eb02dbe97b48f9570fadbaa293d8e52045b97] <==
	{"level":"info","ts":"2024-03-18T14:08:49.454554Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T14:08:49.454564Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T14:08:49.45473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac82224e2d320a9e switched to configuration voters=(12430535640657037982)"}
	{"level":"info","ts":"2024-03-18T14:08:49.454796Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f6d71e843b8adcd6","local-member-id":"ac82224e2d320a9e","added-peer-id":"ac82224e2d320a9e","added-peer-peer-urls":["https://192.168.61.54:2380"]}
	{"level":"info","ts":"2024-03-18T14:08:49.454921Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f6d71e843b8adcd6","local-member-id":"ac82224e2d320a9e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:08:49.454965Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:08:49.471748Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T14:08:49.4756Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ac82224e2d320a9e","initial-advertise-peer-urls":["https://192.168.61.54:2380"],"listen-peer-urls":["https://192.168.61.54:2380"],"advertise-client-urls":["https://192.168.61.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T14:08:49.478113Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T14:08:49.475235Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.54:2380"}
	{"level":"info","ts":"2024-03-18T14:08:49.478348Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.54:2380"}
	{"level":"info","ts":"2024-03-18T14:08:50.515167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac82224e2d320a9e is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T14:08:50.515252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac82224e2d320a9e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T14:08:50.515341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac82224e2d320a9e received MsgPreVoteResp from ac82224e2d320a9e at term 2"}
	{"level":"info","ts":"2024-03-18T14:08:50.515405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac82224e2d320a9e became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T14:08:50.51542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac82224e2d320a9e received MsgVoteResp from ac82224e2d320a9e at term 3"}
	{"level":"info","ts":"2024-03-18T14:08:50.51544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac82224e2d320a9e became leader at term 3"}
	{"level":"info","ts":"2024-03-18T14:08:50.5155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ac82224e2d320a9e elected leader ac82224e2d320a9e at term 3"}
	{"level":"info","ts":"2024-03-18T14:08:50.521703Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ac82224e2d320a9e","local-member-attributes":"{Name:kubernetes-upgrade-140251 ClientURLs:[https://192.168.61.54:2379]}","request-path":"/0/members/ac82224e2d320a9e/attributes","cluster-id":"f6d71e843b8adcd6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T14:08:50.521979Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:08:50.528099Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:08:50.531905Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.54:2379"}
	{"level":"info","ts":"2024-03-18T14:08:50.5389Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T14:08:50.538984Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T14:08:50.54269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [fb47b39b054e873bebbe0febe76c029c7391afe4dbd2d4efa2bb0de51c510631] <==
	{"level":"info","ts":"2024-03-18T14:08:41.882Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"85.026379ms"}
	{"level":"info","ts":"2024-03-18T14:08:41.918866Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-03-18T14:08:41.920974Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"f6d71e843b8adcd6","local-member-id":"ac82224e2d320a9e","commit-index":280}
	{"level":"info","ts":"2024-03-18T14:08:41.923524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac82224e2d320a9e switched to configuration voters=()"}
	{"level":"info","ts":"2024-03-18T14:08:41.923743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac82224e2d320a9e became follower at term 2"}
	{"level":"info","ts":"2024-03-18T14:08:41.92376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ac82224e2d320a9e [peers: [], term: 2, commit: 280, applied: 0, lastindex: 280, lastterm: 2]"}
	{"level":"warn","ts":"2024-03-18T14:08:41.935123Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-03-18T14:08:41.942204Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":273}
	{"level":"info","ts":"2024-03-18T14:08:41.952159Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-03-18T14:08:41.96139Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"ac82224e2d320a9e","timeout":"7s"}
	{"level":"info","ts":"2024-03-18T14:08:41.961921Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"ac82224e2d320a9e"}
	{"level":"info","ts":"2024-03-18T14:08:41.96202Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"ac82224e2d320a9e","local-server-version":"3.5.10","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-18T14:08:41.96471Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-18T14:08:41.96503Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T14:08:41.965243Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T14:08:41.965277Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T14:08:41.965832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac82224e2d320a9e switched to configuration voters=(12430535640657037982)"}
	{"level":"info","ts":"2024-03-18T14:08:41.965964Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f6d71e843b8adcd6","local-member-id":"ac82224e2d320a9e","added-peer-id":"ac82224e2d320a9e","added-peer-peer-urls":["https://192.168.61.54:2380"]}
	{"level":"info","ts":"2024-03-18T14:08:41.968203Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f6d71e843b8adcd6","local-member-id":"ac82224e2d320a9e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:08:41.968258Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:08:41.983333Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T14:08:41.98364Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ac82224e2d320a9e","initial-advertise-peer-urls":["https://192.168.61.54:2380"],"listen-peer-urls":["https://192.168.61.54:2380"],"advertise-client-urls":["https://192.168.61.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T14:08:41.98371Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T14:08:41.983806Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.54:2380"}
	{"level":"info","ts":"2024-03-18T14:08:41.983842Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.54:2380"}
	
	
	==> kernel <==
	 14:08:56 up 0 min,  0 users,  load average: 1.55, 0.42, 0.14
	Linux kubernetes-upgrade-140251 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2910d0aa0fd6597a38db2e36211eacddbefec051af25b325ca8a46f34a091203] <==
	I0318 14:08:41.384137       1 options.go:222] external host was not specified, using 192.168.61.54
	I0318 14:08:41.416183       1 server.go:148] Version: v1.29.0-rc.2
	I0318 14:08:41.416263       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [e5f5431bad1201ac51a47f5c1a8f7fdbe0c252cf6407ac5e35da55460e766699] <==
	I0318 14:08:52.487201       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0318 14:08:52.495553       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 14:08:52.495658       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 14:08:52.496288       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 14:08:52.496298       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 14:08:52.596944       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 14:08:52.605855       1 aggregator.go:165] initial CRD sync complete...
	I0318 14:08:52.605936       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 14:08:52.605961       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 14:08:52.605985       1 cache.go:39] Caches are synced for autoregister controller
	I0318 14:08:52.622211       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 14:08:52.674827       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 14:08:52.681120       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 14:08:52.689471       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0318 14:08:52.689536       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0318 14:08:52.713172       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 14:08:52.713381       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 14:08:52.715350       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E0318 14:08:52.744795       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0318 14:08:53.486469       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 14:08:54.049202       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 14:08:54.065240       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 14:08:54.104762       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 14:08:54.143631       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 14:08:54.154431       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [72e7ff69460fb4a36e6697c6cfaace474f4ea1137d9913e06d145b1512a6baee] <==
	W0318 14:08:56.502789       1 shared_informer.go:591] resyncPeriod 19h23m47.111018869s is smaller than resyncCheckPeriod 23h59m57.466264726s and the informer has already started. Changing it to 23h59m57.466264726s
	I0318 14:08:56.502858       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	W0318 14:08:56.502887       1 shared_informer.go:591] resyncPeriod 22h11m30.008771233s is smaller than resyncCheckPeriod 23h59m57.466264726s and the informer has already started. Changing it to 23h59m57.466264726s
	I0318 14:08:56.502958       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 14:08:56.502985       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 14:08:56.503006       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 14:08:56.503017       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 14:08:56.503079       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 14:08:56.503099       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 14:08:56.503116       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 14:08:56.503134       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 14:08:56.503148       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 14:08:56.503164       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 14:08:56.503204       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 14:08:56.503218       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 14:08:56.503232       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 14:08:56.503288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 14:08:56.503325       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 14:08:56.503364       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0318 14:08:56.503420       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 14:08:56.503456       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 14:08:56.503483       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 14:08:56.648723       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0318 14:08:56.648882       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 14:08:56.648892       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	
	
	==> kube-controller-manager [7506f3f9c36c41b93d50aa7c78c6f5d0067f9a44a04f47e51a9aa264a7983bc2] <==
	I0318 14:08:42.410719       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [6633f2fd97914f6be3127266d7a7a5e15582a9d810a4135cb56cf41bbd355aea] <==
	I0318 14:08:50.635707       1 serving.go:380] Generated self-signed cert in-memory
	W0318 14:08:52.538508       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 14:08:52.539119       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 14:08:52.539193       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 14:08:52.539259       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 14:08:52.624720       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0318 14:08:52.624802       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 14:08:52.631711       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 14:08:52.632088       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 14:08:52.632189       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 14:08:52.632279       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 14:08:52.732658       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f97ca85960cb76b4579a22a387bfc690ac3d1bf23ad40ae1d054da3eebcd1957] <==
	
	
	==> kubelet <==
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.574969    2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c58017165b7fb5bfea42b12c6183f028-etcd-data\") pod \"etcd-kubernetes-upgrade-140251\" (UID: \"c58017165b7fb5bfea42b12c6183f028\") " pod="kube-system/etcd-kubernetes-upgrade-140251"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.575001    2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9cf2884337d0c76e7d4d4471ce74053-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-140251\" (UID: \"e9cf2884337d0c76e7d4d4471ce74053\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-140251"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.575033    2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9cf2884337d0c76e7d4d4471ce74053-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-140251\" (UID: \"e9cf2884337d0c76e7d4d4471ce74053\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-140251"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.575166    2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9bd35877095d104655c40c62f1494075-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-140251\" (UID: \"9bd35877095d104655c40c62f1494075\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-140251"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.575212    2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bd35877095d104655c40c62f1494075-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-140251\" (UID: \"9bd35877095d104655c40c62f1494075\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-140251"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.575244    2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9cf2884337d0c76e7d4d4471ce74053-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-140251\" (UID: \"e9cf2884337d0c76e7d4d4471ce74053\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-140251"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.575273    2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bd35877095d104655c40c62f1494075-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-140251\" (UID: \"9bd35877095d104655c40c62f1494075\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-140251"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.575319    2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9bd35877095d104655c40c62f1494075-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-140251\" (UID: \"9bd35877095d104655c40c62f1494075\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-140251"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.575355    2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bd35877095d104655c40c62f1494075-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-140251\" (UID: \"9bd35877095d104655c40c62f1494075\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-140251"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: E0318 14:08:48.779822    2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-140251?timeout=10s\": dial tcp 192.168.61.54:8443: connect: connection refused" interval="800ms"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.796429    2292 scope.go:117] "RemoveContainer" containerID="2910d0aa0fd6597a38db2e36211eacddbefec051af25b325ca8a46f34a091203"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.796945    2292 scope.go:117] "RemoveContainer" containerID="7506f3f9c36c41b93d50aa7c78c6f5d0067f9a44a04f47e51a9aa264a7983bc2"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.798674    2292 scope.go:117] "RemoveContainer" containerID="f97ca85960cb76b4579a22a387bfc690ac3d1bf23ad40ae1d054da3eebcd1957"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.803904    2292 scope.go:117] "RemoveContainer" containerID="fb47b39b054e873bebbe0febe76c029c7391afe4dbd2d4efa2bb0de51c510631"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:48.881923    2292 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-140251"
	Mar 18 14:08:48 kubernetes-upgrade-140251 kubelet[2292]: E0318 14:08:48.888726    2292 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.54:8443: connect: connection refused" node="kubernetes-upgrade-140251"
	Mar 18 14:08:49 kubernetes-upgrade-140251 kubelet[2292]: W0318 14:08:49.200847    2292 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.54:8443: connect: connection refused
	Mar 18 14:08:49 kubernetes-upgrade-140251 kubelet[2292]: E0318 14:08:49.200971    2292 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.54:8443: connect: connection refused
	Mar 18 14:08:49 kubernetes-upgrade-140251 kubelet[2292]: W0318 14:08:49.207003    2292 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.54:8443: connect: connection refused
	Mar 18 14:08:49 kubernetes-upgrade-140251 kubelet[2292]: E0318 14:08:49.207201    2292 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.54:8443: connect: connection refused
	Mar 18 14:08:49 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:49.690440    2292 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-140251"
	Mar 18 14:08:52 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:52.679763    2292 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-140251"
	Mar 18 14:08:52 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:52.680304    2292 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-140251"
	Mar 18 14:08:53 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:53.140771    2292 apiserver.go:52] "Watching apiserver"
	Mar 18 14:08:53 kubernetes-upgrade-140251 kubelet[2292]: I0318 14:08:53.173906    2292 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-140251 -n kubernetes-upgrade-140251
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-140251 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-140251 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-140251 describe pod storage-provisioner: exit status 1 (70.409378ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-140251 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-140251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-140251
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-140251: (1.147278499s)
--- FAIL: TestKubernetesUpgrade (353.34s)
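
Illustrative note (not part of the captured test output): the post-mortem above lists non-running pods with kubectl's field selector (status.phase!=Running) and then describes them. A minimal client-go sketch of the same query, assuming a reachable cluster and a placeholder kubeconfig path (both hypothetical, not taken from the test harness):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same field selector the post-mortem uses to find pods that are not Running.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}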

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (283.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-782728 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-782728 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m42.762345995s)

                                                
                                                
-- stdout --
	* [old-k8s-version-782728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-782728" primary control-plane node in "old-k8s-version-782728" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 14:10:43.147133 1122305 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:10:43.147420 1122305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:10:43.147431 1122305 out.go:304] Setting ErrFile to fd 2...
	I0318 14:10:43.147436 1122305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:10:43.147620 1122305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:10:43.148364 1122305 out.go:298] Setting JSON to false
	I0318 14:10:43.149569 1122305 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21190,"bootTime":1710749853,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:10:43.149638 1122305 start.go:139] virtualization: kvm guest
	I0318 14:10:43.152003 1122305 out.go:177] * [old-k8s-version-782728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:10:43.153668 1122305 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:10:43.153727 1122305 notify.go:220] Checking for updates...
	I0318 14:10:43.155255 1122305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:10:43.156990 1122305 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:10:43.158685 1122305 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:10:43.160353 1122305 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:10:43.162047 1122305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:10:43.164146 1122305 config.go:182] Loaded profile config "bridge-059272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:10:43.164289 1122305 config.go:182] Loaded profile config "enable-default-cni-059272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:10:43.164414 1122305 config.go:182] Loaded profile config "flannel-059272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:10:43.164585 1122305 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:10:43.206552 1122305 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 14:10:43.208025 1122305 start.go:297] selected driver: kvm2
	I0318 14:10:43.208045 1122305 start.go:901] validating driver "kvm2" against <nil>
	I0318 14:10:43.208055 1122305 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:10:43.208956 1122305 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:10:43.209034 1122305 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:10:43.228057 1122305 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:10:43.228135 1122305 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 14:10:43.228468 1122305 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:10:43.228575 1122305 cni.go:84] Creating CNI manager for ""
	I0318 14:10:43.228594 1122305 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:10:43.228613 1122305 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 14:10:43.228690 1122305 start.go:340] cluster config:
	{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:10:43.228814 1122305 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:10:43.231681 1122305 out.go:177] * Starting "old-k8s-version-782728" primary control-plane node in "old-k8s-version-782728" cluster
	I0318 14:10:43.233181 1122305 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:10:43.233247 1122305 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 14:10:43.233258 1122305 cache.go:56] Caching tarball of preloaded images
	I0318 14:10:43.233392 1122305 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:10:43.233409 1122305 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 14:10:43.233556 1122305 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:10:43.233584 1122305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json: {Name:mke65fda3c9c54396e82d829c7a663cae165c9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:10:43.233804 1122305 start.go:360] acquireMachinesLock for old-k8s-version-782728: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:10:43.233867 1122305 start.go:364] duration metric: took 30.257µs to acquireMachinesLock for "old-k8s-version-782728"
	I0318 14:10:43.233892 1122305 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:10:43.233990 1122305 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 14:10:43.235764 1122305 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 14:10:43.235967 1122305 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:43.236019 1122305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:43.253600 1122305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I0318 14:10:43.254181 1122305 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:43.254978 1122305 main.go:141] libmachine: Using API Version  1
	I0318 14:10:43.255006 1122305 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:43.255406 1122305 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:43.255650 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:10:43.256122 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:10:43.256339 1122305 start.go:159] libmachine.API.Create for "old-k8s-version-782728" (driver="kvm2")
	I0318 14:10:43.256366 1122305 client.go:168] LocalClient.Create starting
	I0318 14:10:43.256405 1122305 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 14:10:43.256454 1122305 main.go:141] libmachine: Decoding PEM data...
	I0318 14:10:43.256475 1122305 main.go:141] libmachine: Parsing certificate...
	I0318 14:10:43.256539 1122305 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 14:10:43.256570 1122305 main.go:141] libmachine: Decoding PEM data...
	I0318 14:10:43.256584 1122305 main.go:141] libmachine: Parsing certificate...
	I0318 14:10:43.256604 1122305 main.go:141] libmachine: Running pre-create checks...
	I0318 14:10:43.256621 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .PreCreateCheck
	I0318 14:10:43.260319 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetConfigRaw
	I0318 14:10:43.260836 1122305 main.go:141] libmachine: Creating machine...
	I0318 14:10:43.260856 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .Create
	I0318 14:10:43.261380 1122305 main.go:141] libmachine: (old-k8s-version-782728) Creating KVM machine...
	I0318 14:10:43.263622 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found existing default KVM network
	I0318 14:10:43.265480 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:43.265268 1122327 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:56:14:d4} reservation:<nil>}
	I0318 14:10:43.266994 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:43.266914 1122327 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015d80}
	I0318 14:10:43.267028 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | created network xml: 
	I0318 14:10:43.267041 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | <network>
	I0318 14:10:43.267053 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG |   <name>mk-old-k8s-version-782728</name>
	I0318 14:10:43.267067 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG |   <dns enable='no'/>
	I0318 14:10:43.267074 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG |   
	I0318 14:10:43.267085 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0318 14:10:43.267096 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG |     <dhcp>
	I0318 14:10:43.267106 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0318 14:10:43.267117 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG |     </dhcp>
	I0318 14:10:43.267130 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG |   </ip>
	I0318 14:10:43.267144 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG |   
	I0318 14:10:43.267240 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | </network>
	I0318 14:10:43.267274 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | 
	I0318 14:10:43.272823 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | trying to create private KVM network mk-old-k8s-version-782728 192.168.50.0/24...
	I0318 14:10:43.368867 1122305 main.go:141] libmachine: (old-k8s-version-782728) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728 ...
	I0318 14:10:43.368903 1122305 main.go:141] libmachine: (old-k8s-version-782728) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 14:10:43.368925 1122305 main.go:141] libmachine: (old-k8s-version-782728) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 14:10:43.368946 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | private KVM network mk-old-k8s-version-782728 192.168.50.0/24 created
	I0318 14:10:43.368962 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:43.365053 1122327 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:10:43.691978 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:43.690490 1122327 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa...
	I0318 14:10:43.925665 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:43.925524 1122327 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/old-k8s-version-782728.rawdisk...
	I0318 14:10:43.925702 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Writing magic tar header
	I0318 14:10:43.925717 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Writing SSH key tar header
	I0318 14:10:43.925807 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:43.925724 1122327 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728 ...
	I0318 14:10:43.925880 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728
	I0318 14:10:43.925930 1122305 main.go:141] libmachine: (old-k8s-version-782728) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728 (perms=drwx------)
	I0318 14:10:43.925966 1122305 main.go:141] libmachine: (old-k8s-version-782728) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 14:10:43.925981 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 14:10:43.925996 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:10:43.926011 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 14:10:43.926026 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 14:10:43.926051 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Checking permissions on dir: /home/jenkins
	I0318 14:10:43.926067 1122305 main.go:141] libmachine: (old-k8s-version-782728) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 14:10:43.926084 1122305 main.go:141] libmachine: (old-k8s-version-782728) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 14:10:43.926093 1122305 main.go:141] libmachine: (old-k8s-version-782728) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 14:10:43.926102 1122305 main.go:141] libmachine: (old-k8s-version-782728) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 14:10:43.926112 1122305 main.go:141] libmachine: (old-k8s-version-782728) Creating domain...
	I0318 14:10:43.926123 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Checking permissions on dir: /home
	I0318 14:10:43.926147 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Skipping /home - not owner
	I0318 14:10:43.927575 1122305 main.go:141] libmachine: (old-k8s-version-782728) define libvirt domain using xml: 
	I0318 14:10:43.927604 1122305 main.go:141] libmachine: (old-k8s-version-782728) <domain type='kvm'>
	I0318 14:10:43.927615 1122305 main.go:141] libmachine: (old-k8s-version-782728)   <name>old-k8s-version-782728</name>
	I0318 14:10:43.927624 1122305 main.go:141] libmachine: (old-k8s-version-782728)   <memory unit='MiB'>2200</memory>
	I0318 14:10:43.927633 1122305 main.go:141] libmachine: (old-k8s-version-782728)   <vcpu>2</vcpu>
	I0318 14:10:43.927641 1122305 main.go:141] libmachine: (old-k8s-version-782728)   <features>
	I0318 14:10:43.927653 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <acpi/>
	I0318 14:10:43.927664 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <apic/>
	I0318 14:10:43.927680 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <pae/>
	I0318 14:10:43.927690 1122305 main.go:141] libmachine: (old-k8s-version-782728)     
	I0318 14:10:43.927698 1122305 main.go:141] libmachine: (old-k8s-version-782728)   </features>
	I0318 14:10:43.927710 1122305 main.go:141] libmachine: (old-k8s-version-782728)   <cpu mode='host-passthrough'>
	I0318 14:10:43.927722 1122305 main.go:141] libmachine: (old-k8s-version-782728)   
	I0318 14:10:43.927732 1122305 main.go:141] libmachine: (old-k8s-version-782728)   </cpu>
	I0318 14:10:43.927742 1122305 main.go:141] libmachine: (old-k8s-version-782728)   <os>
	I0318 14:10:43.927751 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <type>hvm</type>
	I0318 14:10:43.927762 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <boot dev='cdrom'/>
	I0318 14:10:43.927771 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <boot dev='hd'/>
	I0318 14:10:43.927783 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <bootmenu enable='no'/>
	I0318 14:10:43.927791 1122305 main.go:141] libmachine: (old-k8s-version-782728)   </os>
	I0318 14:10:43.927809 1122305 main.go:141] libmachine: (old-k8s-version-782728)   <devices>
	I0318 14:10:43.927842 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <disk type='file' device='cdrom'>
	I0318 14:10:43.927872 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/boot2docker.iso'/>
	I0318 14:10:43.927886 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <target dev='hdc' bus='scsi'/>
	I0318 14:10:43.927897 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <readonly/>
	I0318 14:10:43.927905 1122305 main.go:141] libmachine: (old-k8s-version-782728)     </disk>
	I0318 14:10:43.927915 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <disk type='file' device='disk'>
	I0318 14:10:43.927924 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 14:10:43.927950 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/old-k8s-version-782728.rawdisk'/>
	I0318 14:10:43.927969 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <target dev='hda' bus='virtio'/>
	I0318 14:10:43.927977 1122305 main.go:141] libmachine: (old-k8s-version-782728)     </disk>
	I0318 14:10:43.927990 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <interface type='network'>
	I0318 14:10:43.928004 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <source network='mk-old-k8s-version-782728'/>
	I0318 14:10:43.928016 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <model type='virtio'/>
	I0318 14:10:43.928025 1122305 main.go:141] libmachine: (old-k8s-version-782728)     </interface>
	I0318 14:10:43.928037 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <interface type='network'>
	I0318 14:10:43.928046 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <source network='default'/>
	I0318 14:10:43.928056 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <model type='virtio'/>
	I0318 14:10:43.928065 1122305 main.go:141] libmachine: (old-k8s-version-782728)     </interface>
	I0318 14:10:43.928076 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <serial type='pty'>
	I0318 14:10:43.928087 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <target port='0'/>
	I0318 14:10:43.928097 1122305 main.go:141] libmachine: (old-k8s-version-782728)     </serial>
	I0318 14:10:43.928109 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <console type='pty'>
	I0318 14:10:43.928120 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <target type='serial' port='0'/>
	I0318 14:10:43.928129 1122305 main.go:141] libmachine: (old-k8s-version-782728)     </console>
	I0318 14:10:43.928145 1122305 main.go:141] libmachine: (old-k8s-version-782728)     <rng model='virtio'>
	I0318 14:10:43.928193 1122305 main.go:141] libmachine: (old-k8s-version-782728)       <backend model='random'>/dev/random</backend>
	I0318 14:10:43.928235 1122305 main.go:141] libmachine: (old-k8s-version-782728)     </rng>
	I0318 14:10:43.928250 1122305 main.go:141] libmachine: (old-k8s-version-782728)     
	I0318 14:10:43.928261 1122305 main.go:141] libmachine: (old-k8s-version-782728)     
	I0318 14:10:43.928275 1122305 main.go:141] libmachine: (old-k8s-version-782728)   </devices>
	I0318 14:10:43.928290 1122305 main.go:141] libmachine: (old-k8s-version-782728) </domain>
	I0318 14:10:43.928305 1122305 main.go:141] libmachine: (old-k8s-version-782728) 
	I0318 14:10:43.933317 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:68:5e:8b in network default
	I0318 14:10:43.934147 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:43.934177 1122305 main.go:141] libmachine: (old-k8s-version-782728) Ensuring networks are active...
	I0318 14:10:43.935102 1122305 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network default is active
	I0318 14:10:43.935557 1122305 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network mk-old-k8s-version-782728 is active
	I0318 14:10:43.936382 1122305 main.go:141] libmachine: (old-k8s-version-782728) Getting domain xml...
	I0318 14:10:43.937449 1122305 main.go:141] libmachine: (old-k8s-version-782728) Creating domain...
	I0318 14:10:45.471514 1122305 main.go:141] libmachine: (old-k8s-version-782728) Waiting to get IP...
	I0318 14:10:45.472513 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:45.473229 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:45.473291 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:45.473216 1122327 retry.go:31] will retry after 197.707672ms: waiting for machine to come up
	I0318 14:10:45.672934 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:45.673576 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:45.673601 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:45.673521 1122327 retry.go:31] will retry after 360.623055ms: waiting for machine to come up
	I0318 14:10:46.036412 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:46.037430 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:46.037455 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:46.037329 1122327 retry.go:31] will retry after 345.556699ms: waiting for machine to come up
	I0318 14:10:46.385235 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:46.385847 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:46.385874 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:46.385795 1122327 retry.go:31] will retry after 557.009366ms: waiting for machine to come up
	I0318 14:10:46.944701 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:46.945291 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:46.945313 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:46.945226 1122327 retry.go:31] will retry after 463.74346ms: waiting for machine to come up
	I0318 14:10:47.411232 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:47.411731 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:47.411767 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:47.411685 1122327 retry.go:31] will retry after 744.419393ms: waiting for machine to come up
	I0318 14:10:48.158287 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:48.158930 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:48.158969 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:48.158857 1122327 retry.go:31] will retry after 1.141477117s: waiting for machine to come up
	I0318 14:10:49.301698 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:49.302263 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:49.302290 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:49.302198 1122327 retry.go:31] will retry after 1.489252212s: waiting for machine to come up
	I0318 14:10:50.793093 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:50.793706 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:50.793739 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:50.793584 1122327 retry.go:31] will retry after 1.377860848s: waiting for machine to come up
	I0318 14:10:52.173075 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:52.173662 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:52.173739 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:52.173600 1122327 retry.go:31] will retry after 1.648489069s: waiting for machine to come up
	I0318 14:10:53.824376 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:53.826998 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:53.827023 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:53.826773 1122327 retry.go:31] will retry after 2.245492058s: waiting for machine to come up
	I0318 14:10:56.075966 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:56.075998 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:56.076014 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:56.074137 1122327 retry.go:31] will retry after 2.400901667s: waiting for machine to come up
	I0318 14:10:58.484465 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:10:58.492040 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:10:58.492079 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:10:58.491978 1122327 retry.go:31] will retry after 2.79331204s: waiting for machine to come up
	I0318 14:11:01.286826 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:01.287475 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:11:01.287493 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:11:01.287392 1122327 retry.go:31] will retry after 3.787029054s: waiting for machine to come up
	I0318 14:11:05.077084 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:05.077667 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:11:05.077700 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:11:05.077599 1122327 retry.go:31] will retry after 6.614503473s: waiting for machine to come up
	I0318 14:11:11.695873 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:11.696522 1122305 main.go:141] libmachine: (old-k8s-version-782728) Found IP for machine: 192.168.50.229
	I0318 14:11:11.696576 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has current primary IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:11.696598 1122305 main.go:141] libmachine: (old-k8s-version-782728) Reserving static IP address...
	I0318 14:11:11.696949 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"} in network mk-old-k8s-version-782728
	I0318 14:11:11.775359 1122305 main.go:141] libmachine: (old-k8s-version-782728) Reserved static IP address: 192.168.50.229
	I0318 14:11:11.775401 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Getting to WaitForSSH function...
	I0318 14:11:11.775412 1122305 main.go:141] libmachine: (old-k8s-version-782728) Waiting for SSH to be available...
	I0318 14:11:11.778286 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:11.778573 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728
	I0318 14:11:11.778596 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find defined IP address of network mk-old-k8s-version-782728 interface with MAC address 52:54:00:bb:bf:3d
	I0318 14:11:11.778770 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH client type: external
	I0318 14:11:11.778800 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa (-rw-------)
	I0318 14:11:11.778834 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:11:11.778849 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | About to run SSH command:
	I0318 14:11:11.778868 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | exit 0
	I0318 14:11:11.782766 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | SSH cmd err, output: exit status 255: 
	I0318 14:11:11.782789 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0318 14:11:11.782817 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | command : exit 0
	I0318 14:11:11.782825 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | err     : exit status 255
	I0318 14:11:11.782842 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | output  : 
	I0318 14:11:14.783564 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Getting to WaitForSSH function...
	I0318 14:11:14.786130 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:14.786460 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:14.786495 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:14.786595 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH client type: external
	I0318 14:11:14.786616 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa (-rw-------)
	I0318 14:11:14.786635 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:11:14.786642 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | About to run SSH command:
	I0318 14:11:14.786668 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | exit 0
	I0318 14:11:14.916187 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | SSH cmd err, output: <nil>: 
	I0318 14:11:14.916588 1122305 main.go:141] libmachine: (old-k8s-version-782728) KVM machine creation complete!
	I0318 14:11:14.916863 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetConfigRaw
	I0318 14:11:14.917499 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:11:14.917734 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:11:14.917932 1122305 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 14:11:14.917961 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetState
	I0318 14:11:14.919491 1122305 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 14:11:14.919513 1122305 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 14:11:14.919521 1122305 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 14:11:14.919531 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:14.921781 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:14.922068 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:14.922101 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:14.922277 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:11:14.922471 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:14.922691 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:14.922830 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:11:14.922971 1122305 main.go:141] libmachine: Using SSH client type: native
	I0318 14:11:14.923212 1122305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:11:14.923229 1122305 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 14:11:15.031275 1122305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:11:15.031308 1122305 main.go:141] libmachine: Detecting the provisioner...
	I0318 14:11:15.031320 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:15.034354 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.034744 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:15.034777 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.034970 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:11:15.035179 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:15.035368 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:15.035561 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:11:15.035789 1122305 main.go:141] libmachine: Using SSH client type: native
	I0318 14:11:15.036157 1122305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:11:15.036179 1122305 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 14:11:15.145050 1122305 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 14:11:15.145176 1122305 main.go:141] libmachine: found compatible host: buildroot
	I0318 14:11:15.145190 1122305 main.go:141] libmachine: Provisioning with buildroot...
	I0318 14:11:15.145199 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:11:15.145505 1122305 buildroot.go:166] provisioning hostname "old-k8s-version-782728"
	I0318 14:11:15.145542 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:11:15.145757 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:15.148554 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.148920 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:15.148942 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.149197 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:11:15.149388 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:15.149592 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:15.149740 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:11:15.149998 1122305 main.go:141] libmachine: Using SSH client type: native
	I0318 14:11:15.150237 1122305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:11:15.150256 1122305 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-782728 && echo "old-k8s-version-782728" | sudo tee /etc/hostname
	I0318 14:11:15.276484 1122305 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-782728
	
	I0318 14:11:15.276518 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:15.280450 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.280821 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:15.280846 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.281051 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:11:15.281324 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:15.281505 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:15.281622 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:11:15.281828 1122305 main.go:141] libmachine: Using SSH client type: native
	I0318 14:11:15.282040 1122305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:11:15.282059 1122305 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-782728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-782728/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-782728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:11:15.402652 1122305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:11:15.402687 1122305 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:11:15.402736 1122305 buildroot.go:174] setting up certificates
	I0318 14:11:15.402749 1122305 provision.go:84] configureAuth start
	I0318 14:11:15.402763 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:11:15.403067 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:11:15.405793 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.406172 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:15.406204 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.406334 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:15.408946 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.409384 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:15.409428 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.409571 1122305 provision.go:143] copyHostCerts
	I0318 14:11:15.409665 1122305 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:11:15.409683 1122305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:11:15.409786 1122305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:11:15.409985 1122305 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:11:15.410002 1122305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:11:15.410055 1122305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:11:15.410147 1122305 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:11:15.410158 1122305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:11:15.410196 1122305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:11:15.410311 1122305 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-782728 san=[127.0.0.1 192.168.50.229 localhost minikube old-k8s-version-782728]
	I0318 14:11:15.610213 1122305 provision.go:177] copyRemoteCerts
	I0318 14:11:15.610292 1122305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:11:15.610321 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:15.613407 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.613785 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:15.613813 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.614024 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:11:15.614267 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:15.614435 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:11:15.614590 1122305 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:11:15.701037 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:11:15.733666 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 14:11:15.763620 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 14:11:15.793076 1122305 provision.go:87] duration metric: took 390.306833ms to configureAuth
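A minimal Go sketch of the certificate step logged above: it issues a server certificate carrying the SANs from the log (127.0.0.1, 192.168.50.229, localhost, minikube, old-k8s-version-782728). This is illustrative only, not minikube's provisioning code; it signs with a throwaway self-signed CA instead of the ca.pem/ca-key.pem used in the run.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Throwaway CA; the real run signs with the existing ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.old-k8s-version-782728"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-782728"}},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-782728"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.229")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}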
	I0318 14:11:15.793111 1122305 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:11:15.793299 1122305 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:11:15.793382 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:15.796599 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.796947 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:15.796972 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:15.797203 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:11:15.797435 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:15.797624 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:15.797748 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:11:15.797884 1122305 main.go:141] libmachine: Using SSH client type: native
	I0318 14:11:15.798072 1122305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:11:15.798092 1122305 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:11:16.095692 1122305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:11:16.095729 1122305 main.go:141] libmachine: Checking connection to Docker...
	I0318 14:11:16.095741 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetURL
	I0318 14:11:16.097194 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using libvirt version 6000000
	I0318 14:11:16.099907 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.100322 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:16.100378 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.100568 1122305 main.go:141] libmachine: Docker is up and running!
	I0318 14:11:16.100584 1122305 main.go:141] libmachine: Reticulating splines...
	I0318 14:11:16.100605 1122305 client.go:171] duration metric: took 32.844217743s to LocalClient.Create
	I0318 14:11:16.100629 1122305 start.go:167] duration metric: took 32.844293809s to libmachine.API.Create "old-k8s-version-782728"
	I0318 14:11:16.100638 1122305 start.go:293] postStartSetup for "old-k8s-version-782728" (driver="kvm2")
	I0318 14:11:16.100648 1122305 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:11:16.100669 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:11:16.100927 1122305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:11:16.100954 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:16.103317 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.103666 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:16.103703 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.103973 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:11:16.104162 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:16.104358 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:11:16.104532 1122305 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:11:16.187351 1122305 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:11:16.192281 1122305 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:11:16.192309 1122305 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:11:16.192370 1122305 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:11:16.192453 1122305 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:11:16.192539 1122305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:11:16.202897 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:11:16.231667 1122305 start.go:296] duration metric: took 131.011235ms for postStartSetup
	I0318 14:11:16.231726 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetConfigRaw
	I0318 14:11:16.232408 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:11:16.235801 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.236270 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:16.236308 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.236538 1122305 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:11:16.236731 1122305 start.go:128] duration metric: took 33.002728183s to createHost
	I0318 14:11:16.236758 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:16.239186 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.239611 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:16.239635 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.239848 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:11:16.240090 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:16.240257 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:16.240402 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:11:16.240584 1122305 main.go:141] libmachine: Using SSH client type: native
	I0318 14:11:16.240768 1122305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:11:16.240784 1122305 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 14:11:16.349336 1122305 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771076.339451831
	
	I0318 14:11:16.349365 1122305 fix.go:216] guest clock: 1710771076.339451831
	I0318 14:11:16.349399 1122305 fix.go:229] Guest: 2024-03-18 14:11:16.339451831 +0000 UTC Remote: 2024-03-18 14:11:16.236744713 +0000 UTC m=+33.149235953 (delta=102.707118ms)
	I0318 14:11:16.349427 1122305 fix.go:200] guest clock delta is within tolerance: 102.707118ms
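A minimal sketch of the guest-clock drift check logged above, reproducing the delta from the two timestamps in the log; the one-second tolerance and the helper name are assumptions, not minikube's actual values.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute guest/host clock delta and whether it
// is small enough that no clock resync is needed.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Unix(0, 1710771076339451831).UTC()                // guest clock: 2024-03-18 14:11:16.339451831
	host := time.Date(2024, 3, 18, 14, 11, 16, 236744713, time.UTC) // remote reference time

	delta, ok := withinTolerance(guest, host, time.Second)  // tolerance value is an assumption
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)  // prints delta=102.707118ms withinTolerance=true
}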
	I0318 14:11:16.349435 1122305 start.go:83] releasing machines lock for "old-k8s-version-782728", held for 33.115555193s
	I0318 14:11:16.349460 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:11:16.349797 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:11:16.353161 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.353637 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:16.353665 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.353922 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:11:16.354496 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:11:16.354671 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:11:16.354777 1122305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:11:16.354831 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:16.354942 1122305 ssh_runner.go:195] Run: cat /version.json
	I0318 14:11:16.354970 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:11:16.357931 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.357960 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.358411 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:16.358486 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.358568 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:16.358592 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:16.358795 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:11:16.358812 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:11:16.359019 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:16.359056 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:11:16.359250 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:11:16.359287 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:11:16.359424 1122305 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:11:16.359770 1122305 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:11:16.445837 1122305 ssh_runner.go:195] Run: systemctl --version
	I0318 14:11:16.473563 1122305 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:11:16.652028 1122305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:11:16.658954 1122305 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:11:16.659039 1122305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:11:16.687229 1122305 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:11:16.687261 1122305 start.go:494] detecting cgroup driver to use...
	I0318 14:11:16.687352 1122305 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:11:16.709329 1122305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:11:16.725031 1122305 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:11:16.725095 1122305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:11:16.740710 1122305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:11:16.756272 1122305 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:11:16.902138 1122305 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:11:17.061752 1122305 docker.go:233] disabling docker service ...
	I0318 14:11:17.061827 1122305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:11:17.078119 1122305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:11:17.093132 1122305 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:11:17.257193 1122305 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:11:17.384452 1122305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:11:17.403125 1122305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:11:17.425152 1122305 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 14:11:17.425234 1122305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:11:17.437365 1122305 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:11:17.437443 1122305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:11:17.449785 1122305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:11:17.461181 1122305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:11:17.472937 1122305 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:11:17.484454 1122305 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:11:17.495392 1122305 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:11:17.495452 1122305 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:11:17.510878 1122305 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
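A minimal sketch of the bridge-netfilter preparation seen above: the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded explicitly and IPv4 forwarding is enabled. The command strings match the log; the Go wrapper around them is an assumption, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns a descriptive error on failure.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

// ensureBridgeNetfilter mirrors the fallback in the log: probe the sysctl,
// load br_netfilter if the probe fails, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// "might be okay" per the log message; fall back to loading the module.
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			return err
		}
	}
	return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("bridge netfilter setup failed:", err)
	}
}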
	I0318 14:11:17.521830 1122305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:11:17.674768 1122305 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:11:17.848404 1122305 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:11:17.848507 1122305 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:11:17.854297 1122305 start.go:562] Will wait 60s for crictl version
	I0318 14:11:17.854364 1122305 ssh_runner.go:195] Run: which crictl
	I0318 14:11:17.859316 1122305 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:11:17.904567 1122305 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:11:17.904684 1122305 ssh_runner.go:195] Run: crio --version
	I0318 14:11:17.935820 1122305 ssh_runner.go:195] Run: crio --version
	I0318 14:11:17.978623 1122305 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 14:11:17.979960 1122305 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:11:17.984120 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:17.984596 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:11:01 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:11:17.984659 1122305 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:11:17.984885 1122305 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 14:11:17.990207 1122305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
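The one-liner above upserts the host.minikube.internal entry: it strips any stale line and appends the current mapping. Below is a pure-Go sketch of the same idempotent update, illustrative only and not minikube's implementation.

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the grep/echo pipeline in the log above.
func upsertHostsEntry(hosts, ip, host string) string {
	lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(before, "192.168.50.1", "host.minikube.internal"))
	// Output:
	// 127.0.0.1	localhost
	// 192.168.50.1	host.minikube.internal
}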
	I0318 14:11:18.005678 1122305 kubeadm.go:877] updating cluster {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:11:18.005830 1122305 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:11:18.005903 1122305 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:11:18.048576 1122305 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:11:18.048674 1122305 ssh_runner.go:195] Run: which lz4
	I0318 14:11:18.054760 1122305 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:11:18.060544 1122305 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:11:18.060590 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 14:11:20.159801 1122305 crio.go:444] duration metric: took 2.105087936s to copy over tarball
	I0318 14:11:20.159931 1122305 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:11:23.367085 1122305 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.207115966s)
	I0318 14:11:23.367122 1122305 crio.go:451] duration metric: took 3.207266915s to extract the tarball
	I0318 14:11:23.367132 1122305 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:11:23.413510 1122305 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:11:23.467679 1122305 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:11:23.467712 1122305 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:11:23.467796 1122305 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:11:23.467852 1122305 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:11:23.468117 1122305 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 14:11:23.468159 1122305 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:11:23.468300 1122305 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:11:23.468347 1122305 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:11:23.468485 1122305 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:11:23.468505 1122305 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 14:11:23.470545 1122305 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 14:11:23.470584 1122305 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:11:23.470597 1122305 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:11:23.470545 1122305 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:11:23.470552 1122305 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:11:23.470625 1122305 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:11:23.470675 1122305 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:11:23.471121 1122305 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 14:11:23.625936 1122305 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:11:23.627165 1122305 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 14:11:23.636176 1122305 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:11:23.640646 1122305 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 14:11:23.642335 1122305 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:11:23.678025 1122305 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:11:23.679915 1122305 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 14:11:23.731019 1122305 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 14:11:23.731067 1122305 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:11:23.731128 1122305 ssh_runner.go:195] Run: which crictl
	I0318 14:11:23.735016 1122305 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 14:11:23.735065 1122305 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:11:23.735119 1122305 ssh_runner.go:195] Run: which crictl
	I0318 14:11:23.771522 1122305 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 14:11:23.771577 1122305 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:11:23.771631 1122305 ssh_runner.go:195] Run: which crictl
	I0318 14:11:23.803893 1122305 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 14:11:23.803938 1122305 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 14:11:23.804006 1122305 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 14:11:23.804030 1122305 ssh_runner.go:195] Run: which crictl
	I0318 14:11:23.804051 1122305 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:11:23.804103 1122305 ssh_runner.go:195] Run: which crictl
	I0318 14:11:23.830800 1122305 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 14:11:23.830859 1122305 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:11:23.830917 1122305 ssh_runner.go:195] Run: which crictl
	I0318 14:11:23.846715 1122305 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:11:23.846771 1122305 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 14:11:23.846816 1122305 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 14:11:23.846829 1122305 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 14:11:23.846844 1122305 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:11:23.846857 1122305 ssh_runner.go:195] Run: which crictl
	I0318 14:11:23.846962 1122305 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 14:11:23.846992 1122305 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:11:23.847024 1122305 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:11:23.996990 1122305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 14:11:23.997016 1122305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 14:11:23.997106 1122305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 14:11:23.997128 1122305 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 14:11:24.000315 1122305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 14:11:24.000319 1122305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 14:11:24.000372 1122305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 14:11:24.036144 1122305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 14:11:24.132968 1122305 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:11:24.283423 1122305 cache_images.go:92] duration metric: took 815.68651ms to LoadCachedImages
	W0318 14:11:24.283578 1122305 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
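A minimal sketch of the cached-image decision traced above: an image missing from the node's container runtime, or present at the wrong hash, is marked as needing transfer and loaded from the local cache directory; that load is what fails here because the cache tarballs do not exist on disk. The type and function names below are assumptions, not minikube's cache_images API.

package main

import "fmt"

// imageCheck captures what the log compares for each required image.
type imageCheck struct {
	name      string // image reference, e.g. registry.k8s.io/etcd:3.4.13-0
	remoteID  string // ID reported by `podman image inspect` on the node; "" if absent
	wantID    string // ID the preload expects the image to have
	cachePath string // local tarball to load from when a transfer is needed
}

// needsTransfer mirrors the "does not exist at hash ... in container runtime"
// decision in the log above.
func needsTransfer(c imageCheck) bool {
	return c.remoteID == "" || c.remoteID != c.wantID
}

func main() {
	c := imageCheck{
		name:      "registry.k8s.io/etcd:3.4.13-0",
		remoteID:  "",
		wantID:    "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
		cachePath: ".minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0",
	}
	if needsTransfer(c) {
		// In the run above this load fails: the cache tarball is not on disk.
		fmt.Printf("%s needs transfer; would load from %s\n", c.name, c.cachePath)
	}
}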
	I0318 14:11:24.283597 1122305 kubeadm.go:928] updating node { 192.168.50.229 8443 v1.20.0 crio true true} ...
	I0318 14:11:24.283758 1122305 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-782728 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:11:24.283874 1122305 ssh_runner.go:195] Run: crio config
	I0318 14:11:24.345959 1122305 cni.go:84] Creating CNI manager for ""
	I0318 14:11:24.345988 1122305 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:11:24.346003 1122305 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:11:24.346030 1122305 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-782728 NodeName:old-k8s-version-782728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 14:11:24.346204 1122305 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-782728"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
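	A config like the one generated above can be sanity-checked before the real init. A minimal sketch, assuming the file has already been written to /var/tmp/minikube/kubeadm.yaml as in this run; 'kubeadm config images pull' is the command the preflight output itself suggests further down in this log, while --dry-run is an assumption about the kubeadm CLI rather than something this run executed:
	# Sketch (not part of the test run): pre-pull images and dry-run the generated config
	sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run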
	
	I0318 14:11:24.346274 1122305 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 14:11:24.357808 1122305 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:11:24.357897 1122305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:11:24.370835 1122305 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 14:11:24.393543 1122305 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:11:24.412548 1122305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 14:11:24.431966 1122305 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0318 14:11:24.436418 1122305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
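	The one-liner above rewrites /etc/hosts so that control-plane.minikube.internal always resolves to the node IP. Spelled out step by step, a sketch of the same operation (paths and IP taken from the command above):
	# Sketch: the /etc/hosts update above, expanded for readability
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$      # drop any stale mapping
	printf '192.168.50.229\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$   # append the current one
	sudo cp /tmp/h.$$ /etc/hosts                                              # install the rewritten file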
	I0318 14:11:24.450534 1122305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:11:24.583698 1122305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:11:24.602360 1122305 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728 for IP: 192.168.50.229
	I0318 14:11:24.602388 1122305 certs.go:194] generating shared ca certs ...
	I0318 14:11:24.602411 1122305 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:11:24.602610 1122305 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:11:24.602667 1122305 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:11:24.602681 1122305 certs.go:256] generating profile certs ...
	I0318 14:11:24.602761 1122305 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.key
	I0318 14:11:24.602779 1122305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.crt with IP's: []
	I0318 14:11:25.052295 1122305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.crt ...
	I0318 14:11:25.052334 1122305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.crt: {Name:mkfb731b27dd8e3cafd52e65cd550b1408b7809a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:11:25.052530 1122305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.key ...
	I0318 14:11:25.052546 1122305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.key: {Name:mkfb22a6ad532f18232fb6e2a7de61d6463476c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:11:25.052664 1122305 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612
	I0318 14:11:25.052683 1122305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt.07e4f612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.229]
	I0318 14:11:25.283420 1122305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt.07e4f612 ...
	I0318 14:11:25.283459 1122305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt.07e4f612: {Name:mkdb0c0db21e157c511427916ed1803b90af7970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:11:25.283677 1122305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612 ...
	I0318 14:11:25.283699 1122305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612: {Name:mke9d38866093244a9e82de322da4da42fe430ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:11:25.283816 1122305 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt.07e4f612 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt
	I0318 14:11:25.284717 1122305 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key
	I0318 14:11:25.284837 1122305 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key
	I0318 14:11:25.284869 1122305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt with IP's: []
	I0318 14:11:25.435012 1122305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt ...
	I0318 14:11:25.435052 1122305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt: {Name:mk929efc8ed165cf35172678bf1e301d50bd7c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:11:25.435277 1122305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key ...
	I0318 14:11:25.435303 1122305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key: {Name:mk5fc5169f7ce199d021610b7b99ea62937cdeef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:11:25.435560 1122305 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:11:25.435622 1122305 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:11:25.435637 1122305 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:11:25.435672 1122305 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:11:25.435706 1122305 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:11:25.435734 1122305 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:11:25.435789 1122305 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:11:25.436570 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:11:25.499817 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:11:25.558385 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:11:25.631623 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:11:25.671420 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 14:11:25.712358 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 14:11:25.750610 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:11:25.787807 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:11:25.825305 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:11:25.862212 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:11:25.901548 1122305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:11:25.940196 1122305 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:11:25.963602 1122305 ssh_runner.go:195] Run: openssl version
	I0318 14:11:25.973038 1122305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:11:25.989155 1122305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:11:25.997204 1122305 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:11:25.997292 1122305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:11:26.006252 1122305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:11:26.022964 1122305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:11:26.042405 1122305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:11:26.048872 1122305 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:11:26.048937 1122305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:11:26.056169 1122305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:11:26.072431 1122305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:11:26.088652 1122305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:11:26.095009 1122305 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:11:26.095105 1122305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:11:26.103895 1122305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
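	The commands above follow one pattern per certificate: copy the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, then add a second link named after its OpenSSL subject hash so certificate verification can find it. A sketch of that pattern for the minikube CA (paths and the b5213941 hash are the ones shown in the log):
	# Sketch: hash-and-link pattern used above for each CA certificate
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem                      # expose the CA under /etc/ssl/certs
	HASH=$(openssl x509 -hash -noout -in "$CERT")                          # subject hash; b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # hash-named link OpenSSL resolves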
	I0318 14:11:26.120368 1122305 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:11:26.126420 1122305 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 14:11:26.126499 1122305 kubeadm.go:391] StartCluster: {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:11:26.126619 1122305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:11:26.126714 1122305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:11:26.174317 1122305 cri.go:89] found id: ""
	I0318 14:11:26.174424 1122305 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 14:11:26.188570 1122305 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:11:26.202015 1122305 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:11:26.214384 1122305 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:11:26.214413 1122305 kubeadm.go:156] found existing configuration files:
	
	I0318 14:11:26.214476 1122305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:11:26.226832 1122305 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:11:26.226911 1122305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:11:26.247870 1122305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:11:26.273906 1122305 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:11:26.274012 1122305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:11:26.289085 1122305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:11:26.306071 1122305 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:11:26.306151 1122305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:11:26.320232 1122305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:11:26.347419 1122305 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:11:26.347496 1122305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:11:26.363356 1122305 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:11:26.522448 1122305 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:11:26.522870 1122305 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:11:26.722006 1122305 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:11:26.722156 1122305 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:11:26.722278 1122305 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:11:27.056683 1122305 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:11:27.060078 1122305 out.go:204]   - Generating certificates and keys ...
	I0318 14:11:27.060188 1122305 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:11:27.060267 1122305 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:11:27.317219 1122305 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 14:11:27.393852 1122305 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 14:11:27.583754 1122305 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 14:11:27.709775 1122305 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 14:11:27.961368 1122305 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 14:11:27.961853 1122305 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-782728] and IPs [192.168.50.229 127.0.0.1 ::1]
	I0318 14:11:28.276309 1122305 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 14:11:28.276500 1122305 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-782728] and IPs [192.168.50.229 127.0.0.1 ::1]
	I0318 14:11:28.955292 1122305 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 14:11:29.495672 1122305 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 14:11:29.773008 1122305 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 14:11:29.773347 1122305 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:11:29.859325 1122305 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:11:30.398761 1122305 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:11:30.676223 1122305 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:11:30.863405 1122305 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:11:30.879682 1122305 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:11:30.880846 1122305 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:11:30.881508 1122305 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:11:31.019294 1122305 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:11:31.021271 1122305 out.go:204]   - Booting up control plane ...
	I0318 14:11:31.021425 1122305 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:11:31.030253 1122305 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:11:31.031350 1122305 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:11:31.032142 1122305 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:11:31.036303 1122305 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:12:11.035613 1122305 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:12:11.036422 1122305 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:12:11.036722 1122305 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:12:16.037089 1122305 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:12:16.037311 1122305 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:12:26.038334 1122305 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:12:26.038593 1122305 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:12:46.039205 1122305 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:12:46.039525 1122305 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:13:26.040006 1122305 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:13:26.040621 1122305 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:13:26.040663 1122305 kubeadm.go:309] 
	I0318 14:13:26.040751 1122305 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:13:26.040879 1122305 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:13:26.040899 1122305 kubeadm.go:309] 
	I0318 14:13:26.040983 1122305 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:13:26.041087 1122305 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:13:26.041350 1122305 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:13:26.041364 1122305 kubeadm.go:309] 
	I0318 14:13:26.041614 1122305 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:13:26.041706 1122305 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:13:26.041805 1122305 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:13:26.041826 1122305 kubeadm.go:309] 
	I0318 14:13:26.042066 1122305 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:13:26.042252 1122305 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:13:26.042266 1122305 kubeadm.go:309] 
	I0318 14:13:26.042448 1122305 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:13:26.042595 1122305 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:13:26.042789 1122305 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:13:26.042972 1122305 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:13:26.042992 1122305 kubeadm.go:309] 
	I0318 14:13:26.044399 1122305 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:13:26.044535 1122305 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:13:26.044714 1122305 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 14:13:26.044830 1122305 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-782728] and IPs [192.168.50.229 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-782728] and IPs [192.168.50.229 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-782728] and IPs [192.168.50.229 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-782728] and IPs [192.168.50.229 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
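	At this point kubeadm has given up waiting for the kubelet. The triage steps it prints can be run directly on the node (for example via 'minikube ssh'); a sketch using only the commands the output above suggests, plus the health probe kubeadm itself kept retrying:
	# Sketch: node-side triage for the wait-control-plane timeout above
	systemctl status kubelet                                            # is the kubelet service running at all?
	journalctl -xeu kubelet | tail -n 100                               # why it stopped or never registered
	curl -sSL http://localhost:10248/healthz                            # the probe kubeadm was retrying
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the line above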
	
	I0318 14:13:26.044886 1122305 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:13:28.486643 1122305 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.441729511s)
	I0318 14:13:28.486741 1122305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:13:28.501582 1122305 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:13:28.512530 1122305 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:13:28.512551 1122305 kubeadm.go:156] found existing configuration files:
	
	I0318 14:13:28.512599 1122305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:13:28.523484 1122305 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:13:28.523544 1122305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:13:28.533918 1122305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:13:28.544622 1122305 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:13:28.544679 1122305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:13:28.555175 1122305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:13:28.565508 1122305 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:13:28.565576 1122305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:13:28.576782 1122305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:13:28.587368 1122305 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:13:28.587445 1122305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:13:28.598332 1122305 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:13:28.827215 1122305 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:15:25.179643 1122305 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:15:25.179749 1122305 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 14:15:25.180771 1122305 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:15:25.180838 1122305 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:15:25.180919 1122305 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:15:25.181006 1122305 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:15:25.181096 1122305 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:15:25.181153 1122305 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:15:25.183055 1122305 out.go:204]   - Generating certificates and keys ...
	I0318 14:15:25.183121 1122305 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:15:25.183174 1122305 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:15:25.183240 1122305 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:15:25.183291 1122305 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:15:25.183348 1122305 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:15:25.183394 1122305 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:15:25.183454 1122305 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:15:25.183506 1122305 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:15:25.183577 1122305 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:15:25.183688 1122305 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:15:25.183756 1122305 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:15:25.183854 1122305 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:15:25.183948 1122305 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:15:25.184000 1122305 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:15:25.184053 1122305 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:15:25.184098 1122305 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:15:25.184190 1122305 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:15:25.184264 1122305 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:15:25.184324 1122305 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:15:25.184431 1122305 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:15:25.186885 1122305 out.go:204]   - Booting up control plane ...
	I0318 14:15:25.187024 1122305 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:15:25.187091 1122305 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:15:25.187151 1122305 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:15:25.187225 1122305 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:15:25.187377 1122305 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:15:25.187435 1122305 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:15:25.187493 1122305 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:15:25.187669 1122305 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:15:25.187735 1122305 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:15:25.187911 1122305 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:15:25.187970 1122305 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:15:25.188119 1122305 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:15:25.188194 1122305 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:15:25.188400 1122305 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:15:25.188481 1122305 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:15:25.188707 1122305 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:15:25.188729 1122305 kubeadm.go:309] 
	I0318 14:15:25.188774 1122305 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:15:25.188833 1122305 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:15:25.188848 1122305 kubeadm.go:309] 
	I0318 14:15:25.188878 1122305 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:15:25.188910 1122305 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:15:25.189007 1122305 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:15:25.189014 1122305 kubeadm.go:309] 
	I0318 14:15:25.189104 1122305 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:15:25.189134 1122305 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:15:25.189166 1122305 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:15:25.189172 1122305 kubeadm.go:309] 
	I0318 14:15:25.189253 1122305 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:15:25.189330 1122305 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:15:25.189340 1122305 kubeadm.go:309] 
	I0318 14:15:25.189436 1122305 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:15:25.189520 1122305 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:15:25.189588 1122305 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:15:25.189647 1122305 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:15:25.189692 1122305 kubeadm.go:309] 
	I0318 14:15:25.189712 1122305 kubeadm.go:393] duration metric: took 3m59.063221445s to StartCluster
	I0318 14:15:25.189754 1122305 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:15:25.189808 1122305 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:15:25.239252 1122305 cri.go:89] found id: ""
	I0318 14:15:25.239290 1122305 logs.go:276] 0 containers: []
	W0318 14:15:25.239302 1122305 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:15:25.239310 1122305 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:15:25.239369 1122305 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:15:25.287340 1122305 cri.go:89] found id: ""
	I0318 14:15:25.287368 1122305 logs.go:276] 0 containers: []
	W0318 14:15:25.287375 1122305 logs.go:278] No container was found matching "etcd"
	I0318 14:15:25.287381 1122305 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:15:25.287445 1122305 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:15:25.327218 1122305 cri.go:89] found id: ""
	I0318 14:15:25.327249 1122305 logs.go:276] 0 containers: []
	W0318 14:15:25.327260 1122305 logs.go:278] No container was found matching "coredns"
	I0318 14:15:25.327268 1122305 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:15:25.327328 1122305 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:15:25.364884 1122305 cri.go:89] found id: ""
	I0318 14:15:25.364918 1122305 logs.go:276] 0 containers: []
	W0318 14:15:25.364927 1122305 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:15:25.364933 1122305 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:15:25.364987 1122305 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:15:25.401037 1122305 cri.go:89] found id: ""
	I0318 14:15:25.401072 1122305 logs.go:276] 0 containers: []
	W0318 14:15:25.401080 1122305 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:15:25.401087 1122305 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:15:25.401138 1122305 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:15:25.443430 1122305 cri.go:89] found id: ""
	I0318 14:15:25.443460 1122305 logs.go:276] 0 containers: []
	W0318 14:15:25.443469 1122305 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:15:25.443478 1122305 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:15:25.443539 1122305 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:15:25.478498 1122305 cri.go:89] found id: ""
	I0318 14:15:25.478529 1122305 logs.go:276] 0 containers: []
	W0318 14:15:25.478538 1122305 logs.go:278] No container was found matching "kindnet"
	I0318 14:15:25.478554 1122305 logs.go:123] Gathering logs for kubelet ...
	I0318 14:15:25.478569 1122305 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:15:25.529139 1122305 logs.go:123] Gathering logs for dmesg ...
	I0318 14:15:25.529181 1122305 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:15:25.543745 1122305 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:15:25.543781 1122305 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:15:25.692134 1122305 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:15:25.692159 1122305 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:15:25.692176 1122305 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:15:25.788869 1122305 logs.go:123] Gathering logs for container status ...
	I0318 14:15:25.788912 1122305 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 14:15:25.831141 1122305 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 14:15:25.831185 1122305 out.go:239] * 
	* 
	W0318 14:15:25.831240 1122305 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:15:25.831263 1122305 out.go:239] * 
	* 
	W0318 14:15:25.832137 1122305 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:15:25.835858 1122305 out.go:177] 
	W0318 14:15:25.837037 1122305 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:15:25.837102 1122305 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 14:15:25.837137 1122305 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 14:15:25.838601 1122305 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-782728 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 6 (245.065259ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0318 14:15:26.126067 1128275 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-782728" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (283.06s)
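Editor's note: the kubeadm output above recommends checking the kubelet ('systemctl status kubelet', 'journalctl -xeu kubelet'), the minikube error text suggests retrying with --extra-config=kubelet.cgroup-driver=systemd, and the post-mortem status warns about a stale kubectl context. A minimal triage sketch along those lines, using only the profile name and flags taken from the failing command; the cgroup-driver override is the workaround the log itself proposes, not a confirmed fix:

	# re-point kubectl at this profile; the status output warned the context is stale
	minikube update-context -p old-k8s-version-782728

	# inspect the kubelet and containers on the guest, as the kubeadm output recommends
	minikube ssh -p old-k8s-version-782728 "sudo journalctl -xeu kubelet"
	minikube ssh -p old-k8s-version-782728 "sudo crictl ps -a"

	# retry the start with the cgroup-driver override suggested in the log
	minikube start -p old-k8s-version-782728 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd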

x
+
TestStartStop/group/no-preload/serial/Stop (139.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-188109 --alsologtostderr -v=3
E0318 14:13:28.543248 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:13:30.628892 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-188109 --alsologtostderr -v=3: exit status 82 (2m0.541712293s)

-- stdout --
	* Stopping node "no-preload-188109"  ...
	
	

-- /stdout --
** stderr ** 
	I0318 14:13:24.288612 1127708 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:13:24.288740 1127708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:13:24.288751 1127708 out.go:304] Setting ErrFile to fd 2...
	I0318 14:13:24.288756 1127708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:13:24.288981 1127708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:13:24.289273 1127708 out.go:298] Setting JSON to false
	I0318 14:13:24.289381 1127708 mustload.go:65] Loading cluster: no-preload-188109
	I0318 14:13:24.289743 1127708 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:13:24.289823 1127708 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/config.json ...
	I0318 14:13:24.290016 1127708 mustload.go:65] Loading cluster: no-preload-188109
	I0318 14:13:24.290161 1127708 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:13:24.290209 1127708 stop.go:39] StopHost: no-preload-188109
	I0318 14:13:24.290642 1127708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:13:24.290706 1127708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:13:24.306058 1127708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0318 14:13:24.306540 1127708 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:13:24.307183 1127708 main.go:141] libmachine: Using API Version  1
	I0318 14:13:24.307210 1127708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:13:24.307548 1127708 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:13:24.310219 1127708 out.go:177] * Stopping node "no-preload-188109"  ...
	I0318 14:13:24.312173 1127708 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 14:13:24.312230 1127708 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:13:24.312525 1127708 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 14:13:24.312554 1127708 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:13:24.315417 1127708 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:13:24.315941 1127708 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:13:24.315973 1127708 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:13:24.316146 1127708 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:13:24.316345 1127708 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:13:24.316525 1127708 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:13:24.316671 1127708 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:13:24.420133 1127708 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 14:13:24.490550 1127708 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 14:13:24.552984 1127708 main.go:141] libmachine: Stopping "no-preload-188109"...
	I0318 14:13:24.553023 1127708 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:13:24.554753 1127708 main.go:141] libmachine: (no-preload-188109) Calling .Stop
	I0318 14:13:24.558561 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 0/120
	I0318 14:13:25.560117 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 1/120
	I0318 14:13:26.562411 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 2/120
	I0318 14:13:27.564029 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 3/120
	I0318 14:13:28.566720 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 4/120
	I0318 14:13:29.569052 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 5/120
	I0318 14:13:30.570586 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 6/120
	I0318 14:13:31.572350 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 7/120
	I0318 14:13:32.573805 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 8/120
	I0318 14:13:33.575258 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 9/120
	I0318 14:13:34.577388 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 10/120
	I0318 14:13:35.579289 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 11/120
	I0318 14:13:36.580715 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 12/120
	I0318 14:13:37.582502 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 13/120
	I0318 14:13:38.584079 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 14/120
	I0318 14:13:39.585980 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 15/120
	I0318 14:13:40.587705 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 16/120
	I0318 14:13:41.589192 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 17/120
	I0318 14:13:42.590676 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 18/120
	I0318 14:13:43.592248 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 19/120
	I0318 14:13:44.594235 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 20/120
	I0318 14:13:45.595638 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 21/120
	I0318 14:13:46.597707 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 22/120
	I0318 14:13:47.599188 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 23/120
	I0318 14:13:48.600806 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 24/120
	I0318 14:13:49.602268 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 25/120
	I0318 14:13:50.604022 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 26/120
	I0318 14:13:51.605534 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 27/120
	I0318 14:13:52.606979 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 28/120
	I0318 14:13:53.608648 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 29/120
	I0318 14:13:54.610483 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 30/120
	I0318 14:13:55.612076 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 31/120
	I0318 14:13:56.614255 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 32/120
	I0318 14:13:57.616296 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 33/120
	I0318 14:13:58.617499 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 34/120
	I0318 14:13:59.619969 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 35/120
	I0318 14:14:00.622450 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 36/120
	I0318 14:14:01.623920 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 37/120
	I0318 14:14:02.625414 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 38/120
	I0318 14:14:03.626900 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 39/120
	I0318 14:14:04.629111 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 40/120
	I0318 14:14:05.630888 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 41/120
	I0318 14:14:06.632414 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 42/120
	I0318 14:14:07.634748 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 43/120
	I0318 14:14:08.636294 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 44/120
	I0318 14:14:09.638319 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 45/120
	I0318 14:14:10.639918 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 46/120
	I0318 14:14:11.641406 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 47/120
	I0318 14:14:12.642901 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 48/120
	I0318 14:14:13.644282 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 49/120
	I0318 14:14:14.645965 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 50/120
	I0318 14:14:15.647528 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 51/120
	I0318 14:14:16.649220 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 52/120
	I0318 14:14:17.650701 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 53/120
	I0318 14:14:18.652783 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 54/120
	I0318 14:14:19.654362 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 55/120
	I0318 14:14:20.655731 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 56/120
	I0318 14:14:21.657430 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 57/120
	I0318 14:14:22.658800 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 58/120
	I0318 14:14:23.660360 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 59/120
	I0318 14:14:24.662849 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 60/120
	I0318 14:14:25.664512 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 61/120
	I0318 14:14:26.666301 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 62/120
	I0318 14:14:27.667699 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 63/120
	I0318 14:14:28.669363 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 64/120
	I0318 14:14:29.671542 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 65/120
	I0318 14:14:30.672892 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 66/120
	I0318 14:14:31.674559 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 67/120
	I0318 14:14:32.676219 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 68/120
	I0318 14:14:33.677615 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 69/120
	I0318 14:14:34.679820 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 70/120
	I0318 14:14:35.681368 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 71/120
	I0318 14:14:36.682982 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 72/120
	I0318 14:14:37.684465 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 73/120
	I0318 14:14:38.686389 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 74/120
	I0318 14:14:39.688511 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 75/120
	I0318 14:14:40.689952 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 76/120
	I0318 14:14:41.691350 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 77/120
	I0318 14:14:42.692934 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 78/120
	I0318 14:14:43.694376 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 79/120
	I0318 14:14:44.696838 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 80/120
	I0318 14:14:45.698387 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 81/120
	I0318 14:14:46.699779 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 82/120
	I0318 14:14:47.701384 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 83/120
	I0318 14:14:48.702832 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 84/120
	I0318 14:14:49.705031 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 85/120
	I0318 14:14:50.706529 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 86/120
	I0318 14:14:51.708058 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 87/120
	I0318 14:14:52.710530 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 88/120
	I0318 14:14:53.712251 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 89/120
	I0318 14:14:54.713711 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 90/120
	I0318 14:14:55.715012 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 91/120
	I0318 14:14:56.716571 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 92/120
	I0318 14:14:57.718089 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 93/120
	I0318 14:14:58.719351 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 94/120
	I0318 14:14:59.721578 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 95/120
	I0318 14:15:00.722800 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 96/120
	I0318 14:15:01.724219 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 97/120
	I0318 14:15:02.725548 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 98/120
	I0318 14:15:03.727234 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 99/120
	I0318 14:15:04.728658 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 100/120
	I0318 14:15:05.730258 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 101/120
	I0318 14:15:06.731735 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 102/120
	I0318 14:15:07.733377 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 103/120
	I0318 14:15:08.734802 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 104/120
	I0318 14:15:09.736894 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 105/120
	I0318 14:15:10.738863 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 106/120
	I0318 14:15:11.740202 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 107/120
	I0318 14:15:12.741562 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 108/120
	I0318 14:15:13.743099 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 109/120
	I0318 14:15:14.745510 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 110/120
	I0318 14:15:15.747066 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 111/120
	I0318 14:15:16.748700 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 112/120
	I0318 14:15:17.750088 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 113/120
	I0318 14:15:18.751935 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 114/120
	I0318 14:15:19.754065 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 115/120
	I0318 14:15:20.755401 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 116/120
	I0318 14:15:21.757008 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 117/120
	I0318 14:15:22.758337 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 118/120
	I0318 14:15:23.759849 1127708 main.go:141] libmachine: (no-preload-188109) Waiting for machine to stop 119/120
	I0318 14:15:24.760627 1127708 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 14:15:24.760688 1127708 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 14:15:24.762597 1127708 out.go:177] 
	W0318 14:15:24.763807 1127708 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 14:15:24.763853 1127708 out.go:239] * 
	* 
	W0318 14:15:24.768773 1127708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:15:24.770213 1127708 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-188109 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188109 -n no-preload-188109
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188109 -n no-preload-188109: exit status 3 (18.560882899s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0318 14:15:43.332261 1128242 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.40:22: connect: no route to host
	E0318 14:15:43.332287 1128242 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.40:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-188109" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.10s)
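Editor's note: 'minikube stop' gave up after 120 one-second polls (GUEST_STOP_TIMEOUT) and the follow-up status check could no longer reach the guest over SSH (no route to host). A hedged manual-cleanup sketch, assuming virsh is available on the host, the same qemu:///system connection used elsewhere in this run, and that the kvm2 driver named the libvirt domain after the profile (assumptions, not confirmed by this log):

	# if the guest is still reachable, collect the logs the error box asks for
	minikube logs --file=logs.txt -p no-preload-188109

	# check the guest under the system libvirt connection
	virsh --connect qemu:///system list --all

	# hard power-off the stuck guest (domain name assumed to match the profile)
	virsh --connect qemu:///system destroy no-preload-188109

	# remove the profile afterwards if the machine is no longer needed
	minikube delete -p no-preload-188109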

x
+
TestStartStop/group/embed-certs/serial/Stop (139.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-767719 --alsologtostderr -v=3
E0318 14:13:51.109780 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:59.264224 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:14:00.517690 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:14:00.522949 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:14:00.533623 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:14:00.553978 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-767719 --alsologtostderr -v=3: exit status 82 (2m0.567420714s)

-- stdout --
	* Stopping node "embed-certs-767719"  ...
	
	

-- /stdout --
** stderr ** 
	I0318 14:13:49.121038 1127852 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:13:49.121207 1127852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:13:49.121218 1127852 out.go:304] Setting ErrFile to fd 2...
	I0318 14:13:49.121223 1127852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:13:49.121901 1127852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:13:49.122313 1127852 out.go:298] Setting JSON to false
	I0318 14:13:49.122428 1127852 mustload.go:65] Loading cluster: embed-certs-767719
	I0318 14:13:49.123390 1127852 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:13:49.123469 1127852 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/config.json ...
	I0318 14:13:49.123762 1127852 mustload.go:65] Loading cluster: embed-certs-767719
	I0318 14:13:49.123913 1127852 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:13:49.123942 1127852 stop.go:39] StopHost: embed-certs-767719
	I0318 14:13:49.124405 1127852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:13:49.124466 1127852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:13:49.142024 1127852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0318 14:13:49.142551 1127852 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:13:49.143222 1127852 main.go:141] libmachine: Using API Version  1
	I0318 14:13:49.143244 1127852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:13:49.143627 1127852 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:13:49.145679 1127852 out.go:177] * Stopping node "embed-certs-767719"  ...
	I0318 14:13:49.147472 1127852 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 14:13:49.147503 1127852 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:13:49.147758 1127852 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 14:13:49.147785 1127852 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:13:49.150621 1127852 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:13:49.151047 1127852 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:13:49.151094 1127852 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:13:49.151225 1127852 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:13:49.151422 1127852 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:13:49.151590 1127852 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:13:49.151785 1127852 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:13:49.258505 1127852 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 14:13:49.331819 1127852 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 14:13:49.419802 1127852 main.go:141] libmachine: Stopping "embed-certs-767719"...
	I0318 14:13:49.419887 1127852 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:13:49.421513 1127852 main.go:141] libmachine: (embed-certs-767719) Calling .Stop
	I0318 14:13:49.424905 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 0/120
	I0318 14:13:50.426865 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 1/120
	I0318 14:13:51.428495 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 2/120
	I0318 14:13:52.429806 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 3/120
	I0318 14:13:53.431511 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 4/120
	I0318 14:13:54.433600 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 5/120
	I0318 14:13:55.435004 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 6/120
	I0318 14:13:56.436548 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 7/120
	I0318 14:13:57.437823 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 8/120
	I0318 14:13:58.439277 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 9/120
	I0318 14:13:59.441734 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 10/120
	I0318 14:14:00.443202 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 11/120
	I0318 14:14:01.444601 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 12/120
	I0318 14:14:02.446581 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 13/120
	I0318 14:14:03.447897 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 14/120
	I0318 14:14:04.449951 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 15/120
	I0318 14:14:05.451865 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 16/120
	I0318 14:14:06.453035 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 17/120
	I0318 14:14:07.454303 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 18/120
	I0318 14:14:08.455588 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 19/120
	I0318 14:14:09.457615 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 20/120
	I0318 14:14:10.459245 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 21/120
	I0318 14:14:11.460912 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 22/120
	I0318 14:14:12.462425 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 23/120
	I0318 14:14:13.463852 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 24/120
	I0318 14:14:14.466229 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 25/120
	I0318 14:14:15.467668 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 26/120
	I0318 14:14:16.469238 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 27/120
	I0318 14:14:17.470889 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 28/120
	I0318 14:14:18.472510 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 29/120
	I0318 14:14:19.474082 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 30/120
	I0318 14:14:20.475568 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 31/120
	I0318 14:14:21.478051 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 32/120
	I0318 14:14:22.479542 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 33/120
	I0318 14:14:23.481878 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 34/120
	I0318 14:14:24.484105 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 35/120
	I0318 14:14:25.485566 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 36/120
	I0318 14:14:26.487113 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 37/120
	I0318 14:14:27.488458 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 38/120
	I0318 14:14:28.490020 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 39/120
	I0318 14:14:29.492348 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 40/120
	I0318 14:14:30.493974 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 41/120
	I0318 14:14:31.495397 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 42/120
	I0318 14:14:32.496904 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 43/120
	I0318 14:14:33.498398 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 44/120
	I0318 14:14:34.500581 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 45/120
	I0318 14:14:35.502069 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 46/120
	I0318 14:14:36.503747 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 47/120
	I0318 14:14:37.504979 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 48/120
	I0318 14:14:38.506531 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 49/120
	I0318 14:14:39.508307 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 50/120
	I0318 14:14:40.510343 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 51/120
	I0318 14:14:41.511586 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 52/120
	I0318 14:14:42.512973 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 53/120
	I0318 14:14:43.514362 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 54/120
	I0318 14:14:44.516685 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 55/120
	I0318 14:14:45.518128 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 56/120
	I0318 14:14:46.519478 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 57/120
	I0318 14:14:47.520941 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 58/120
	I0318 14:14:48.522360 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 59/120
	I0318 14:14:49.524662 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 60/120
	I0318 14:14:50.526076 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 61/120
	I0318 14:14:51.527269 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 62/120
	I0318 14:14:52.528807 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 63/120
	I0318 14:14:53.530194 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 64/120
	I0318 14:14:54.532493 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 65/120
	I0318 14:14:55.533892 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 66/120
	I0318 14:14:56.535115 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 67/120
	I0318 14:14:57.536594 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 68/120
	I0318 14:14:58.537953 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 69/120
	I0318 14:14:59.539251 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 70/120
	I0318 14:15:00.540613 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 71/120
	I0318 14:15:01.542064 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 72/120
	I0318 14:15:02.543401 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 73/120
	I0318 14:15:03.544897 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 74/120
	I0318 14:15:04.547053 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 75/120
	I0318 14:15:05.548389 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 76/120
	I0318 14:15:06.549836 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 77/120
	I0318 14:15:07.551380 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 78/120
	I0318 14:15:08.552898 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 79/120
	I0318 14:15:09.554284 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 80/120
	I0318 14:15:10.555887 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 81/120
	I0318 14:15:11.558352 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 82/120
	I0318 14:15:12.559750 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 83/120
	I0318 14:15:13.561287 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 84/120
	I0318 14:15:14.563523 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 85/120
	I0318 14:15:15.565321 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 86/120
	I0318 14:15:16.566840 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 87/120
	I0318 14:15:17.568448 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 88/120
	I0318 14:15:18.569856 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 89/120
	I0318 14:15:19.572244 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 90/120
	I0318 14:15:20.574693 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 91/120
	I0318 14:15:21.576313 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 92/120
	I0318 14:15:22.577779 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 93/120
	I0318 14:15:23.579134 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 94/120
	I0318 14:15:24.581251 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 95/120
	I0318 14:15:25.582974 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 96/120
	I0318 14:15:26.584294 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 97/120
	I0318 14:15:27.585791 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 98/120
	I0318 14:15:28.587245 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 99/120
	I0318 14:15:29.589658 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 100/120
	I0318 14:15:30.591197 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 101/120
	I0318 14:15:31.592651 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 102/120
	I0318 14:15:32.594289 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 103/120
	I0318 14:15:33.595805 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 104/120
	I0318 14:15:34.598058 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 105/120
	I0318 14:15:35.599450 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 106/120
	I0318 14:15:36.600825 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 107/120
	I0318 14:15:37.602444 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 108/120
	I0318 14:15:38.603859 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 109/120
	I0318 14:15:39.605230 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 110/120
	I0318 14:15:40.607063 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 111/120
	I0318 14:15:41.608496 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 112/120
	I0318 14:15:42.610144 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 113/120
	I0318 14:15:43.611581 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 114/120
	I0318 14:15:44.613658 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 115/120
	I0318 14:15:45.615183 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 116/120
	I0318 14:15:46.616137 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 117/120
	I0318 14:15:47.617552 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 118/120
	I0318 14:15:48.619187 1127852 main.go:141] libmachine: (embed-certs-767719) Waiting for machine to stop 119/120
	I0318 14:15:49.620232 1127852 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 14:15:49.620299 1127852 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 14:15:49.622554 1127852 out.go:177] 
	W0318 14:15:49.623968 1127852 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 14:15:49.623983 1127852 out.go:239] * 
	* 
	W0318 14:15:49.628762 1127852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:15:49.630402 1127852 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-767719 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767719 -n embed-certs-767719
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767719 -n embed-certs-767719: exit status 3 (18.533008858s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:16:08.164239 1128512 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.45:22: connect: no route to host
	E0318 14:16:08.164265 1128512 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.45:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-767719" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.10s)
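The failure mode above is a stop timeout: minikube polls the KVM domain state once per second for 120 attempts ("Waiting for machine to stop i/120"), the domain never leaves "Running", and the command exits with GUEST_STOP_TIMEOUT (exit status 82) after roughly two minutes. The following is a minimal Go sketch of that wait loop, not minikube's actual implementation; the function names and the simulated driver are illustrative only.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState once per second, up to maxAttempts times,
// mirroring the "Waiting for machine to stop i/120" lines in the log above.
// It returns an error if the machine is still running when attempts run out.
func waitForStop(getState func() (string, error), maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		state, err := getState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulated driver that never stops, reproducing the timeout path.
	alwaysRunning := func() (string, error) { return "Running", nil }
	// Five attempts keep the demo short; the log above shows 120.
	if err := waitForStop(alwaysRunning, 5); err != nil {
		fmt.Println("stop err:", err)
	}
}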

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-075922 --alsologtostderr -v=3
E0318 14:14:10.758429 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:14:17.919383 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 14:14:20.999687 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:14:32.070235 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:14:40.224671 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:14:41.480766 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:15:12.748033 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:12.753294 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:12.763572 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:12.783922 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:12.824277 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:12.904670 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:13.065197 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:13.385647 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:14.025902 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:15.306503 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:17.867512 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:22.441591 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:15:22.987807 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-075922 --alsologtostderr -v=3: exit status 82 (2m0.506854918s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-075922"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 14:14:10.011570 1128011 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:14:10.011769 1128011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:14:10.011781 1128011 out.go:304] Setting ErrFile to fd 2...
	I0318 14:14:10.011788 1128011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:14:10.012047 1128011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:14:10.012307 1128011 out.go:298] Setting JSON to false
	I0318 14:14:10.012410 1128011 mustload.go:65] Loading cluster: default-k8s-diff-port-075922
	I0318 14:14:10.012753 1128011 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:14:10.012832 1128011 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/config.json ...
	I0318 14:14:10.013016 1128011 mustload.go:65] Loading cluster: default-k8s-diff-port-075922
	I0318 14:14:10.013155 1128011 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:14:10.013202 1128011 stop.go:39] StopHost: default-k8s-diff-port-075922
	I0318 14:14:10.013642 1128011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:14:10.013718 1128011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:14:10.029354 1128011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46625
	I0318 14:14:10.029892 1128011 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:14:10.030517 1128011 main.go:141] libmachine: Using API Version  1
	I0318 14:14:10.030543 1128011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:14:10.030907 1128011 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:14:10.033760 1128011 out.go:177] * Stopping node "default-k8s-diff-port-075922"  ...
	I0318 14:14:10.035160 1128011 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 14:14:10.035197 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:14:10.035439 1128011 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 14:14:10.035478 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:14:10.038187 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:14:10.038586 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:12:40 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:14:10.038654 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:14:10.038760 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:14:10.038975 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:14:10.039161 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:14:10.039332 1128011 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:14:10.126789 1128011 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 14:14:10.184040 1128011 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 14:14:10.247307 1128011 main.go:141] libmachine: Stopping "default-k8s-diff-port-075922"...
	I0318 14:14:10.247345 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:14:10.249214 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Stop
	I0318 14:14:10.252754 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 0/120
	I0318 14:14:11.254344 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 1/120
	I0318 14:14:12.255899 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 2/120
	I0318 14:14:13.257520 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 3/120
	I0318 14:14:14.259581 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 4/120
	I0318 14:14:15.261912 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 5/120
	I0318 14:14:16.263607 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 6/120
	I0318 14:14:17.265204 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 7/120
	I0318 14:14:18.266771 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 8/120
	I0318 14:14:19.268307 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 9/120
	I0318 14:14:20.269874 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 10/120
	I0318 14:14:21.271315 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 11/120
	I0318 14:14:22.272798 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 12/120
	I0318 14:14:23.274399 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 13/120
	I0318 14:14:24.275979 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 14/120
	I0318 14:14:25.278361 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 15/120
	I0318 14:14:26.279760 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 16/120
	I0318 14:14:27.281221 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 17/120
	I0318 14:14:28.282616 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 18/120
	I0318 14:14:29.284019 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 19/120
	I0318 14:14:30.286489 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 20/120
	I0318 14:14:31.288144 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 21/120
	I0318 14:14:32.289596 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 22/120
	I0318 14:14:33.291087 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 23/120
	I0318 14:14:34.292394 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 24/120
	I0318 14:14:35.294503 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 25/120
	I0318 14:14:36.296169 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 26/120
	I0318 14:14:37.297722 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 27/120
	I0318 14:14:38.299139 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 28/120
	I0318 14:14:39.300724 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 29/120
	I0318 14:14:40.302818 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 30/120
	I0318 14:14:41.304281 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 31/120
	I0318 14:14:42.305735 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 32/120
	I0318 14:14:43.307178 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 33/120
	I0318 14:14:44.308533 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 34/120
	I0318 14:14:45.310734 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 35/120
	I0318 14:14:46.312306 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 36/120
	I0318 14:14:47.313861 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 37/120
	I0318 14:14:48.315297 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 38/120
	I0318 14:14:49.316832 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 39/120
	I0318 14:14:50.319463 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 40/120
	I0318 14:14:51.320838 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 41/120
	I0318 14:14:52.322496 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 42/120
	I0318 14:14:53.323820 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 43/120
	I0318 14:14:54.325312 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 44/120
	I0318 14:14:55.327359 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 45/120
	I0318 14:14:56.328830 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 46/120
	I0318 14:14:57.330351 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 47/120
	I0318 14:14:58.331765 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 48/120
	I0318 14:14:59.333226 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 49/120
	I0318 14:15:00.335536 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 50/120
	I0318 14:15:01.337000 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 51/120
	I0318 14:15:02.338499 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 52/120
	I0318 14:15:03.340075 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 53/120
	I0318 14:15:04.341562 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 54/120
	I0318 14:15:05.343696 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 55/120
	I0318 14:15:06.345080 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 56/120
	I0318 14:15:07.346530 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 57/120
	I0318 14:15:08.348256 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 58/120
	I0318 14:15:09.350331 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 59/120
	I0318 14:15:10.352660 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 60/120
	I0318 14:15:11.354350 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 61/120
	I0318 14:15:12.355882 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 62/120
	I0318 14:15:13.357308 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 63/120
	I0318 14:15:14.358745 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 64/120
	I0318 14:15:15.361138 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 65/120
	I0318 14:15:16.362617 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 66/120
	I0318 14:15:17.364053 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 67/120
	I0318 14:15:18.365457 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 68/120
	I0318 14:15:19.366778 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 69/120
	I0318 14:15:20.369337 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 70/120
	I0318 14:15:21.370758 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 71/120
	I0318 14:15:22.372201 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 72/120
	I0318 14:15:23.373581 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 73/120
	I0318 14:15:24.375153 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 74/120
	I0318 14:15:25.376898 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 75/120
	I0318 14:15:26.378354 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 76/120
	I0318 14:15:27.379813 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 77/120
	I0318 14:15:28.381178 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 78/120
	I0318 14:15:29.382755 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 79/120
	I0318 14:15:30.385040 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 80/120
	I0318 14:15:31.386666 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 81/120
	I0318 14:15:32.388161 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 82/120
	I0318 14:15:33.389590 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 83/120
	I0318 14:15:34.390993 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 84/120
	I0318 14:15:35.393166 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 85/120
	I0318 14:15:36.394951 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 86/120
	I0318 14:15:37.396490 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 87/120
	I0318 14:15:38.397936 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 88/120
	I0318 14:15:39.399439 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 89/120
	I0318 14:15:40.400872 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 90/120
	I0318 14:15:41.402541 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 91/120
	I0318 14:15:42.404210 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 92/120
	I0318 14:15:43.405934 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 93/120
	I0318 14:15:44.407504 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 94/120
	I0318 14:15:45.409715 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 95/120
	I0318 14:15:46.411423 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 96/120
	I0318 14:15:47.412899 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 97/120
	I0318 14:15:48.414340 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 98/120
	I0318 14:15:49.415905 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 99/120
	I0318 14:15:50.417159 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 100/120
	I0318 14:15:51.418505 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 101/120
	I0318 14:15:52.419935 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 102/120
	I0318 14:15:53.421288 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 103/120
	I0318 14:15:54.423087 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 104/120
	I0318 14:15:55.425257 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 105/120
	I0318 14:15:56.426786 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 106/120
	I0318 14:15:57.428405 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 107/120
	I0318 14:15:58.429820 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 108/120
	I0318 14:15:59.431189 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 109/120
	I0318 14:16:00.433652 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 110/120
	I0318 14:16:01.435109 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 111/120
	I0318 14:16:02.436505 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 112/120
	I0318 14:16:03.438009 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 113/120
	I0318 14:16:04.439279 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 114/120
	I0318 14:16:05.441364 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 115/120
	I0318 14:16:06.442725 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 116/120
	I0318 14:16:07.444401 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 117/120
	I0318 14:16:08.446120 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 118/120
	I0318 14:16:09.447524 1128011 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for machine to stop 119/120
	I0318 14:16:10.449063 1128011 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 14:16:10.449133 1128011 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 14:16:10.451086 1128011 out.go:177] 
	W0318 14:16:10.452502 1128011 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 14:16:10.452522 1128011 out.go:239] * 
	* 
	W0318 14:16:10.457079 1128011 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:16:10.458528 1128011 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-075922 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922: exit status 3 (18.439998683s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:16:28.900214 1128687 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.39:22: connect: no route to host
	E0318 14:16:28.900236 1128687 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.39:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-075922" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.95s)
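The post-mortem step above runs `minikube status --format={{.Host}} -p <profile>` and treats exit status 3 (with "no route to host" on port 22) as the host being unreachable after the failed stop. A small Go sketch of that check, shelling out to the same binary the report uses; the binary path and profile name are taken from the log and are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
)

// hostState returns the {{.Host}} field and the process exit code of the
// status command. Exit status 3 corresponds to the "Error" state seen in
// the post-mortem above.
func hostState(minikubeBin, profile string) (string, int) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.CombinedOutput()
	code := 0
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode()
		} else {
			code = -1 // binary missing or not executable
		}
	}
	return string(out), code
}

func main() {
	state, code := hostState("out/minikube-linux-amd64", "default-k8s-diff-port-075922")
	fmt.Printf("host=%q exit=%d\n", state, code)
}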

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-782728 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-782728 create -f testdata/busybox.yaml: exit status 1 (46.934567ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-782728" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-782728 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 6 (234.389904ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:15:26.410266 1128316 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-782728" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 6 (228.768228ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:15:26.639910 1128346 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-782728" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)
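This DeployApp failure is a precondition problem rather than a deploy problem: the kubectl context "old-k8s-version-782728" is missing from the kubeconfig because the earlier start never completed. A hedged Go sketch of checking that precondition with `kubectl config get-contexts -o name` before attempting the `create -f` step; the helper name is illustrative.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// contextExists lists kubeconfig context names and reports whether the given
// one is present - exactly the condition the DeployApp step above fails on
// ("context ... does not exist").
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	scanner := bufio.NewScanner(bytes.NewReader(out))
	for scanner.Scan() {
		if scanner.Text() == name {
			return true, nil
		}
	}
	return false, scanner.Err()
}

func main() {
	ok, err := contextExists("old-k8s-version-782728")
	fmt.Println("context present:", ok, "err:", err)
}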

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (111.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-782728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0318 14:15:32.237382 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:32.242749 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:32.253098 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:32.273493 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:32.313849 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:32.394488 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:32.554981 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:32.876115 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:33.228735 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:33.517049 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:34.797626 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:37.357983 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:42.478605 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-782728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m51.57125772s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-782728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
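The stderr above shows the enable step did reach the guest: the addon callback ran the in-guest kubectl against /var/lib/minikube/kubeconfig, and the apiserver at localhost:8443 refused the connection, so the metrics-server manifests were never applied. A sketch of how to reproduce the underlying symptom by hand, assuming the profile's VM is reachable (the binary and kubeconfig paths are taken from the stderr above):
    # Check the control plane from inside the guest, the same way the addon callback does.
    out/minikube-linux-amd64 -p old-k8s-version-782728 ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
    # Or probe the apiserver port directly from inside the guest.
    out/minikube-linux-amd64 -p old-k8s-version-782728 ssh -- curl -sk https://localhost:8443/readyz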
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-782728 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-782728 describe deploy/metrics-server -n kube-system: exit status 1 (46.234517ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-782728" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-782728 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 6 (235.832835ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:17:18.492952 1129139 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-782728" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (111.85s)
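The post-mortem shows the second half of the problem: the host reports Running, but the kubeconfig no longer contains an old-k8s-version-782728 entry, which is also why the earlier "kubectl --context old-k8s-version-782728 describe" step failed with "context ... does not exist". Outside of CI this is normally recoverable with the command the warning itself suggests; a sketch using standard minikube and kubectl flags:
    # Rewrite the kubeconfig entry for this profile, then confirm the context exists again.
    out/minikube-linux-amd64 -p old-k8s-version-782728 update-context
    kubectl config get-contexts old-k8s-version-782728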

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188109 -n no-preload-188109
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188109 -n no-preload-188109: exit status 3 (3.199635631s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:15:46.532189 1128452 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.40:22: connect: no route to host
	E0318 14:15:46.532209 1128452 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.40:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-188109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-188109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155394177s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.40:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-188109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188109 -n no-preload-188109
E0318 14:15:52.719132 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:15:53.709878 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:15:53.990439 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188109 -n no-preload-188109: exit status 3 (3.060381012s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:15:55.748330 1128553 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.40:22: connect: no route to host
	E0318 14:15:55.748351 1128553 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.40:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-188109" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
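The failure pattern in this group: after the preceding stop step, the guest at 192.168.61.40 is no longer reachable over SSH ("no route to host"), so instead of reporting "Stopped" the status probe reports "Error", and the dashboard enable aborts with MK_ADDON_ENABLE_PAUSED because, as the error chain shows, the paused check (crictl list over the same SSH session) cannot reach the guest either. Since the kvm2 driver appears to name the libvirt domain after the profile (the old-k8s-version-782728 log below shows a matching domain name), the VM state can be checked directly; a sketch, assuming the domain name matches the profile name:
    # Ask libvirt what state the guest is really in, bypassing minikube's SSH probe.
    virsh -c qemu:///system domstate no-preload-188109
    out/minikube-linux-amd64 -p no-preload-188109 status --format={{.Host}}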

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767719 -n embed-certs-767719
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767719 -n embed-certs-767719: exit status 3 (3.199469274s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:16:11.364241 1128657 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.45:22: connect: no route to host
	E0318 14:16:11.364265 1128657 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.45:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-767719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0318 14:16:13.199549 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:16:15.724938 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:16:15.730245 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:16:15.740516 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:16:15.760790 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:16:15.801139 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:16:15.881491 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:16:16.041972 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:16:16.362623 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:16:17.003650 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-767719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154727806s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.45:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-767719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767719 -n embed-certs-767719
E0318 14:16:18.284111 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767719 -n embed-certs-767719: exit status 3 (3.060801901s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:16:20.580202 1128758 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.45:22: connect: no route to host
	E0318 14:16:20.580226 1128758 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.45:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-767719" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922
E0318 14:16:30.686240 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922: exit status 3 (3.199743546s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:16:32.100234 1128863 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.39:22: connect: no route to host
	E0318 14:16:32.100330 1128863 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.39:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-075922 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0318 14:16:34.670816 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:16:35.806446 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:36.207013 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-075922 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155790242s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.83.39:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-075922 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922: exit status 3 (3.059705886s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 14:16:41.316316 1128934 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.39:22: connect: no route to host
	E0318 14:16:41.316345 1128934 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.39:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-075922" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)
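embed-certs-767719 (192.168.72.45) and default-k8s-diff-port-075922 (192.168.83.39) fail with the same signature as no-preload-188109 above: an "Error" host status from an unreachable SSH port after the stop step, followed by MK_ADDON_ENABLE_PAUSED. One sweep over libvirt shows whether all of these guests are shut off, still running, or undefined; a sketch, again assuming domain names match profile names:
    # List every KVM domain and its state in one pass.
    virsh -c qemu:///system list --all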

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (747.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-782728 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0318 14:17:37.319313 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 14:17:37.647892 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:17:47.488797 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:17:56.592077 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:18:10.146787 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:18:16.081314 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:18:18.302010 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:18:37.830852 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:18:45.986569 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:18:59.568472 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:19:00.517880 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:19:09.409227 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:19:17.918581 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 14:19:28.203083 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:20:12.747538 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:20:32.237235 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:20:40.372006 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 14:20:40.433206 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:20:40.965360 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 14:20:59.922126 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:21:15.725074 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:21:25.564611 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:21:43.409209 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:21:53.249433 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:22:37.318690 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 14:23:10.146726 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:23:18.302481 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:24:00.517764 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:24:17.919164 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 14:25:12.748549 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
E0318 14:25:32.237303 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:26:15.724947 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-782728 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m23.847212406s)

                                                
                                                
-- stdout --
	* [old-k8s-version-782728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-782728" primary control-plane node in "old-k8s-version-782728" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-782728" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
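The stdout above already hints at why the run ends non-zero after 12m23s: the kubeadm control-plane bring-up messages ("Generating certificates and keys ...", "Booting up control plane ...") appear twice, which suggests the first attempt never produced a healthy v1.20.0 control plane and was retried before minikube gave up with exit status 109. When this reproduces, the usual next step is to look at kubelet and the container runtime inside the guest; a sketch using standard commands (not taken from this log):
    # Inspect the guest's kubelet and containers after the failed start.
    out/minikube-linux-amd64 -p old-k8s-version-782728 ssh -- sudo journalctl -u kubelet --no-pager -n 50
    out/minikube-linux-amd64 -p old-k8s-version-782728 ssh -- sudo crictl ps -a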
** stderr ** 
	I0318 14:17:21.149860 1129259 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:17:21.150009 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150020 1129259 out.go:304] Setting ErrFile to fd 2...
	I0318 14:17:21.150027 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150261 1129259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:17:21.150831 1129259 out.go:298] Setting JSON to false
	I0318 14:17:21.151818 1129259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21588,"bootTime":1710749853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:17:21.151904 1129259 start.go:139] virtualization: kvm guest
	I0318 14:17:21.154086 1129259 out.go:177] * [old-k8s-version-782728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:17:21.155595 1129259 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:17:21.157136 1129259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:17:21.155603 1129259 notify.go:220] Checking for updates...
	I0318 14:17:21.160112 1129259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:17:21.161672 1129259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:17:21.163212 1129259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:17:21.164653 1129259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:17:21.166692 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:17:21.167108 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.167176 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.182529 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0318 14:17:21.183003 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.183578 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.183602 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.183959 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.184192 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.186217 1129259 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 14:17:21.187902 1129259 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:17:21.188243 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.188288 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.204193 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0318 14:17:21.204646 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.205226 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.205262 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.205658 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.205879 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.243555 1129259 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 14:17:21.244857 1129259 start.go:297] selected driver: kvm2
	I0318 14:17:21.244882 1129259 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.245008 1129259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:17:21.245726 1129259 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.245812 1129259 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:17:21.261810 1129259 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:17:21.262852 1129259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:17:21.262962 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:17:21.262975 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:17:21.263064 1129259 start.go:340] cluster config:
	{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.263366 1129259 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.265819 1129259 out.go:177] * Starting "old-k8s-version-782728" primary control-plane node in "old-k8s-version-782728" cluster
	I0318 14:17:21.267156 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:17:21.267198 1129259 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 14:17:21.267214 1129259 cache.go:56] Caching tarball of preloaded images
	I0318 14:17:21.267311 1129259 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:17:21.267327 1129259 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 14:17:21.267448 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:17:21.267695 1129259 start.go:360] acquireMachinesLock for old-k8s-version-782728: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:21:12.749494 1129259 start.go:364] duration metric: took 3m51.481737314s to acquireMachinesLock for "old-k8s-version-782728"
	I0318 14:21:12.749582 1129259 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:12.749596 1129259 fix.go:54] fixHost starting: 
	I0318 14:21:12.750059 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:12.750110 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:12.772262 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0318 14:21:12.772787 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:12.773383 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:21:12.773408 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:12.773864 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:12.774101 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:12.774261 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetState
	I0318 14:21:12.776193 1129259 fix.go:112] recreateIfNeeded on old-k8s-version-782728: state=Stopped err=<nil>
	I0318 14:21:12.776227 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	W0318 14:21:12.776377 1129259 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:12.778538 1129259 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-782728" ...
	I0318 14:21:12.780014 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .Start
	I0318 14:21:12.780429 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring networks are active...
	I0318 14:21:12.781303 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network default is active
	I0318 14:21:12.781644 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network mk-old-k8s-version-782728 is active
	I0318 14:21:12.782077 1129259 main.go:141] libmachine: (old-k8s-version-782728) Getting domain xml...
	I0318 14:21:12.782826 1129259 main.go:141] libmachine: (old-k8s-version-782728) Creating domain...
	I0318 14:21:14.142992 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting to get IP...
	I0318 14:21:14.144199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.144824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.144851 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.144681 1130456 retry.go:31] will retry after 192.354686ms: waiting for machine to come up
	I0318 14:21:14.339303 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.339861 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.339886 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.339806 1130456 retry.go:31] will retry after 389.480557ms: waiting for machine to come up
	I0318 14:21:14.731567 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.732127 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.732163 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.732075 1130456 retry.go:31] will retry after 435.139168ms: waiting for machine to come up
	I0318 14:21:15.168657 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.169170 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.169209 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.169147 1130456 retry.go:31] will retry after 398.075576ms: waiting for machine to come up
	I0318 14:21:15.569132 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.569651 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.569699 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.569627 1130456 retry.go:31] will retry after 716.720722ms: waiting for machine to come up
	I0318 14:21:16.287569 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:16.288171 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:16.288208 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:16.288111 1130456 retry.go:31] will retry after 837.119291ms: waiting for machine to come up
	I0318 14:21:17.127197 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.127610 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.127641 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.127572 1130456 retry.go:31] will retry after 786.468871ms: waiting for machine to come up
	I0318 14:21:17.916280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.916885 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.916920 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.916827 1130456 retry.go:31] will retry after 1.219601482s: waiting for machine to come up
	I0318 14:21:19.137624 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:19.138092 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:19.138124 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:19.138038 1130456 retry.go:31] will retry after 1.236592895s: waiting for machine to come up
	I0318 14:21:20.376069 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:20.376549 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:20.376574 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:20.376518 1130456 retry.go:31] will retry after 2.101851485s: waiting for machine to come up
	I0318 14:21:22.480055 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:22.480767 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:22.480805 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:22.480700 1130456 retry.go:31] will retry after 2.377253243s: waiting for machine to come up
	I0318 14:21:24.861000 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:24.861459 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:24.861513 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:24.861440 1130456 retry.go:31] will retry after 2.768860765s: waiting for machine to come up
	I0318 14:21:27.633200 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:27.633774 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:27.633824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:27.633712 1130456 retry.go:31] will retry after 2.743873993s: waiting for machine to come up
	I0318 14:21:30.380835 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:30.381280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:30.381314 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:30.381213 1130456 retry.go:31] will retry after 4.377164627s: waiting for machine to come up
	I0318 14:21:34.760820 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Found IP for machine: 192.168.50.229
	I0318 14:21:34.761353 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has current primary IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761362 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserving static IP address...
	I0318 14:21:34.761782 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.761820 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserved static IP address: 192.168.50.229
	I0318 14:21:34.761845 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | skip adding static IP to network mk-old-k8s-version-782728 - found existing host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"}
	I0318 14:21:34.761864 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Getting to WaitForSSH function...
	I0318 14:21:34.761881 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting for SSH to be available...
	I0318 14:21:34.764073 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764333 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.764360 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764532 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH client type: external
	I0318 14:21:34.764572 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa (-rw-------)
	I0318 14:21:34.764613 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:34.764631 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | About to run SSH command:
	I0318 14:21:34.764647 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | exit 0
	I0318 14:21:34.896449 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:34.896855 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetConfigRaw
	I0318 14:21:34.897582 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:34.899986 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900376 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.900416 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900800 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:21:34.901117 1129259 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:34.901147 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:34.901437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:34.904052 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904424 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.904452 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904606 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:34.904785 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.904945 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.905107 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:34.905279 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:34.905513 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:34.905531 1129259 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:35.016717 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:35.016763 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017067 1129259 buildroot.go:166] provisioning hostname "old-k8s-version-782728"
	I0318 14:21:35.017099 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017382 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.020497 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.020890 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.020924 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.021057 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.021277 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021590 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.021849 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.022055 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.022070 1129259 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-782728 && echo "old-k8s-version-782728" | sudo tee /etc/hostname
	I0318 14:21:35.147357 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-782728
	
	I0318 14:21:35.147390 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.150191 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150607 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.150636 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150853 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.151114 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151347 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151546 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.151781 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.152045 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.152072 1129259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-782728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-782728/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-782728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:35.275206 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
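The script above only touches /etc/hosts when no entry for the new hostname exists yet, so re-provisioning stays idempotent. A quick manual check of the result (a sketch using this run's key path, not something the test executes) would be:

    ssh -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa \
        docker@192.168.50.229 'grep old-k8s-version-782728 /etc/hosts'
    # expected to include a line like: 127.0.1.1 old-k8s-version-782728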
	I0318 14:21:35.275240 1129259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:35.275285 1129259 buildroot.go:174] setting up certificates
	I0318 14:21:35.275295 1129259 provision.go:84] configureAuth start
	I0318 14:21:35.275306 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.275669 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:35.278614 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279090 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.279130 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279354 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.282199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282559 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.282595 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282756 1129259 provision.go:143] copyHostCerts
	I0318 14:21:35.282849 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:35.282867 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:35.282929 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:35.283102 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:35.283114 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:35.283139 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:35.283203 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:35.283210 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:35.283227 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:35.283275 1129259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-782728 san=[127.0.0.1 192.168.50.229 localhost minikube old-k8s-version-782728]
	I0318 14:21:35.515186 1129259 provision.go:177] copyRemoteCerts
	I0318 14:21:35.515266 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:35.515318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.517932 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518244 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.518297 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518441 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.518653 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.518795 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.518970 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:35.607609 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:35.636141 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 14:21:35.664489 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:35.692201 1129259 provision.go:87] duration metric: took 416.891642ms to configureAuth
	I0318 14:21:35.692259 1129259 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:35.692491 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:21:35.692585 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.695742 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696122 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.696159 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696325 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.696561 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696767 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696934 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.697111 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.697355 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.697384 1129259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:35.994320 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:35.994352 1129259 machine.go:97] duration metric: took 1.093217385s to provisionDockerMachine
	I0318 14:21:35.994367 1129259 start.go:293] postStartSetup for "old-k8s-version-782728" (driver="kvm2")
	I0318 14:21:35.994383 1129259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:35.994415 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:35.994757 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:35.994799 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.997438 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.997814 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.997850 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.998044 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.998241 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.998437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.998571 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.089357 1129259 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:36.094372 1129259 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:36.094407 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:36.094499 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:36.094617 1129259 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:36.094714 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:36.106796 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:36.135520 1129259 start.go:296] duration metric: took 141.136354ms for postStartSetup
	I0318 14:21:36.135573 1129259 fix.go:56] duration metric: took 23.385978091s for fixHost
	I0318 14:21:36.135607 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.139108 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139458 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.139491 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139689 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.139978 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140226 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140353 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.140528 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:36.140755 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:36.140771 1129259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 14:21:36.252848 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771696.238093940
	
	I0318 14:21:36.252877 1129259 fix.go:216] guest clock: 1710771696.238093940
	I0318 14:21:36.252884 1129259 fix.go:229] Guest: 2024-03-18 14:21:36.23809394 +0000 UTC Remote: 2024-03-18 14:21:36.13557956 +0000 UTC m=+255.035410784 (delta=102.51438ms)
	I0318 14:21:36.252906 1129259 fix.go:200] guest clock delta is within tolerance: 102.51438ms
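fix.go reads the guest's clock with date +%s.%N over SSH and compares it against the host-side timestamp; here the 102.51438ms delta is accepted as within tolerance. Roughly the same measurement can be reproduced by hand (a sketch only; the SSH round trip itself adds a little skew):

    host_now=$(date +%s.%N)
    guest_now=$(ssh -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa \
        docker@192.168.50.229 'date +%s.%N')
    # a positive value means the guest clock runs ahead of the host
    awk -v g="$guest_now" -v h="$host_now" 'BEGIN { printf "guest-host delta: %+.6fs\n", g - h }'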
	I0318 14:21:36.252911 1129259 start.go:83] releasing machines lock for "old-k8s-version-782728", held for 23.503358875s
	I0318 14:21:36.252936 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.253200 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:36.256277 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256711 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.256741 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256901 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257487 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257702 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257827 1129259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:36.257887 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.258009 1129259 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:36.258034 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.260840 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261336 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261358 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261456 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.261692 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.261789 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261818 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261892 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.261982 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.262127 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.262173 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.262300 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.262429 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.345131 1129259 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:36.371649 1129259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:36.524261 1129259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:36.533020 1129259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:36.533151 1129259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:36.551817 1129259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:36.551860 1129259 start.go:494] detecting cgroup driver to use...
	I0318 14:21:36.551933 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:36.575948 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:36.596748 1129259 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:36.596820 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:36.614156 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:36.630681 1129259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:36.753374 1129259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:36.944402 1129259 docker.go:233] disabling docker service ...
	I0318 14:21:36.944496 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:36.966727 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:36.987565 1129259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:37.121256 1129259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:37.264652 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:37.281737 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:37.306307 1129259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 14:21:37.306374 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.318728 1129259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:37.318818 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.330587 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.343063 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
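The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. Going only by the substitutions shown, the file should afterwards contain lines equivalent to the following (a reconstruction, not captured output):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"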
	I0318 14:21:37.356170 1129259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:37.369932 1129259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:37.380417 1129259 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:37.380487 1129259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:37.397409 1129259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
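The three commands above are a fallback path: the sysctl probe fails because the br_netfilter module is not loaded yet, so the module is loaded and IPv4 forwarding is enabled directly. As a standalone sketch of the same pattern:

    # probe the bridge netfilter sysctl; load the module if the key is missing
    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter
    fi
    # make sure IPv4 forwarding is on for pod traffic
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward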
	I0318 14:21:37.414745 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:37.571427 1129259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:37.747275 1129259 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:37.747357 1129259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:37.752838 1129259 start.go:562] Will wait 60s for crictl version
	I0318 14:21:37.752922 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:37.758286 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:37.799301 1129259 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:37.799400 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.838257 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.889692 1129259 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 14:21:37.891038 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:37.894295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.894865 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:37.894896 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.895237 1129259 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:37.899967 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:37.916249 1129259 kubeadm.go:877] updating cluster {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:37.916384 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:21:37.916449 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:37.974406 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:37.974492 1129259 ssh_runner.go:195] Run: which lz4
	I0318 14:21:37.979374 1129259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:21:37.984355 1129259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:37.984400 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 14:21:39.978421 1129259 crio.go:444] duration metric: took 1.99908094s to copy over tarball
	I0318 14:21:39.978524 1129259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:43.321091 1129259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342462355s)
	I0318 14:21:43.321144 1129259 crio.go:451] duration metric: took 3.342687518s to extract the tarball
	I0318 14:21:43.321155 1129259 ssh_runner.go:146] rm: /preloaded.tar.lz4
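Because no preloaded images were found in CRI-O's store, the ~473 MB preload tarball is pushed into the guest and unpacked under /var (about 2s to copy, 3.3s to extract). Outside the test harness the equivalent steps would look roughly like this (a sketch; the test's ssh_runner performs the transfer internally rather than via scp):

    # stream the tarball to the guest root (sudo tee, since / is not writable by the docker user)
    ssh -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa \
        docker@192.168.50.229 'sudo tee /preloaded.tar.lz4 >/dev/null' \
        < /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    # unpack into /var, keeping xattrs so image layers retain file capabilities, then clean up
    ssh -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa \
        docker@192.168.50.229 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'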
	I0318 14:21:43.365776 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:43.433785 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:43.433824 1129259 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:43.433900 1129259 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.434017 1129259 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.434032 1129259 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 14:21:43.434046 1129259 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.434053 1129259 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.434305 1129259 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436059 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.436080 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.436108 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.436157 1129259 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.436171 1129259 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436220 1129259 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 14:21:43.436239 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.436852 1129259 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.592274 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.597491 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.602837 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.613030 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.613827 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.626606 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.643937 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 14:21:43.712054 1129259 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 14:21:43.712144 1129259 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.712203 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.745459 1129259 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 14:21:43.745524 1129259 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.745578 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.804000 1129259 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 14:21:43.804069 1129259 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.804132 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.818890 1129259 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 14:21:43.818946 1129259 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.818948 1129259 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 14:21:43.818984 1129259 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.818996 1129259 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 14:21:43.819000 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819013 1129259 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.819034 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819043 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819047 1129259 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 14:21:43.819079 1129259 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 14:21:43.819111 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819145 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.819113 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.819191 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.900808 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 14:21:43.900881 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 14:21:43.900956 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 14:21:43.900960 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.901030 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 14:21:43.901092 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.901124 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.979791 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 14:21:43.999132 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 14:21:44.055513 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:44.211993 1129259 cache_images.go:92] duration metric: took 778.138355ms to LoadCachedImages
	W0318 14:21:44.212165 1129259 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0318 14:21:44.212193 1129259 kubeadm.go:928] updating node { 192.168.50.229 8443 v1.20.0 crio true true} ...
	I0318 14:21:44.212368 1129259 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-782728 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:21:44.212495 1129259 ssh_runner.go:195] Run: crio config
	I0318 14:21:44.269727 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:21:44.269766 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:44.269785 1129259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:44.269814 1129259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-782728 NodeName:old-k8s-version-782728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 14:21:44.270015 1129259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-782728"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:44.270105 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 14:21:44.282940 1129259 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:44.283039 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:44.295320 1129259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 14:21:44.315686 1129259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:44.335233 1129259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
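The 2123-byte file copied above is the kubeadm config dumped a few lines earlier. If one wanted to sanity-check such a config by hand, kubeadm's dry-run mode is one option (purely hypothetical here; the test does not run this):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run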
	I0318 14:21:44.357698 1129259 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:44.362264 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:44.377101 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:44.528190 1129259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:44.549708 1129259 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728 for IP: 192.168.50.229
	I0318 14:21:44.549735 1129259 certs.go:194] generating shared ca certs ...
	I0318 14:21:44.549763 1129259 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:44.549989 1129259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:44.550058 1129259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:44.550074 1129259 certs.go:256] generating profile certs ...
	I0318 14:21:44.550213 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.key
	I0318 14:21:44.550297 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612
	I0318 14:21:44.550356 1129259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key
	I0318 14:21:44.550551 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:44.550592 1129259 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:44.550606 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:44.550645 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:44.550677 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:44.550723 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:44.550778 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:44.551493 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:44.612076 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:44.644841 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:44.677687 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:44.719459 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 14:21:44.767865 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 14:21:44.816764 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:44.860167 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:44.891216 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:44.927632 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:44.965589 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:45.002269 1129259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:45.025347 1129259 ssh_runner.go:195] Run: openssl version
	I0318 14:21:45.032361 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:45.046783 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052835 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052942 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.060025 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:45.073939 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:45.087380 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092866 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092945 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.099328 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:45.112233 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:45.126449 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132566 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132667 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.139307 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:45.153117 1129259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:45.158588 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:45.166096 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:45.173537 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:45.181337 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:45.189126 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:45.197163 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:21:45.206171 1129259 kubeadm.go:391] StartCluster: {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:45.206295 1129259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:45.206370 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.247013 1129259 cri.go:89] found id: ""
	I0318 14:21:45.247119 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:45.261917 1129259 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:45.261947 1129259 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:45.261955 1129259 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:45.262015 1129259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:45.276154 1129259 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:45.277263 1129259 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:21:45.277937 1129259 kubeconfig.go:62] /home/jenkins/minikube-integration/18427-1067917/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-782728" cluster setting kubeconfig missing "old-k8s-version-782728" context setting]
	I0318 14:21:45.278862 1129259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:45.280825 1129259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:45.295159 1129259 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.229
	I0318 14:21:45.295211 1129259 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:45.295255 1129259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:45.295321 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.343156 1129259 cri.go:89] found id: ""
	I0318 14:21:45.343242 1129259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:45.361812 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:45.376218 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:45.376250 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:45.376314 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:45.386913 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:45.387056 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:45.398244 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:45.409397 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:45.409476 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:45.421057 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.432124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:45.432193 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.443793 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:45.454348 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:45.454463 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:45.465286 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:45.477199 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:45.613588 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.567767 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.838421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.993039 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:47.096766 1129259 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:47.096883 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:47.596963 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.097569 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.597879 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.097195 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.597924 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.097885 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.597926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:51.096984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:51.597867 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.097894 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.597872 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.096949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.597262 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.097637 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.597078 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.097246 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.597940 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:56.097312 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:56.597953 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.098324 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.598002 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.097907 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.597192 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.097990 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.597523 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.097862 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:01.097925 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:01.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.097198 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.597105 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.097996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.597914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.097805 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.597949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.097415 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.597222 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:06.096954 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:06.597785 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.097171 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.597738 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.097476 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.596984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.097503 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.597464 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.096998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.597822 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.097597 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.597959 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.097914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.597046 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.097863 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.597617 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.097268 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.597088 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.097142 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.597902 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:16.098091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:16.597253 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.097759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.597764 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.097196 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.597181 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.097798 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.598008 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.097899 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.597717 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:21.097339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:21.597443 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.097053 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.597084 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.097025 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.597649 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.097040 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.597607 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.097886 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.597114 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:26.097643 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:26.597493 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.097772 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.597033 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.097997 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.597751 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.097139 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.596987 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.097453 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.598006 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:31.097066 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:31.597688 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.097887 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.597759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.097858 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.597065 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.097024 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.597018 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.097472 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.597226 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.097920 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.597756 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.097176 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.597091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.097280 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.597026 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.097810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.597789 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.097897 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.597313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:41.096966 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:41.597849 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.097957 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.597473 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.097624 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.597810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.098012 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.597317 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.097384 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.597816 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:46.097353 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:46.597824 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:47.097559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:47.097660 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:47.142970 1129259 cri.go:89] found id: ""
	I0318 14:22:47.143027 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.143040 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:47.143047 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:47.143196 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:47.183530 1129259 cri.go:89] found id: ""
	I0318 14:22:47.183564 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.183573 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:47.183578 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:47.183654 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:47.226284 1129259 cri.go:89] found id: ""
	I0318 14:22:47.226317 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.226351 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:47.226359 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:47.226433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:47.272642 1129259 cri.go:89] found id: ""
	I0318 14:22:47.272684 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.272708 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:47.272725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:47.272791 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:47.318501 1129259 cri.go:89] found id: ""
	I0318 14:22:47.318547 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.318562 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:47.318571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:47.318652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:47.357743 1129259 cri.go:89] found id: ""
	I0318 14:22:47.357786 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.357801 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:47.357810 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:47.357894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:47.398516 1129259 cri.go:89] found id: ""
	I0318 14:22:47.398550 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.398563 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:47.398571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:47.398649 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:47.443375 1129259 cri.go:89] found id: ""
	I0318 14:22:47.443413 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.443426 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:47.443439 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:47.443456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:47.512719 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:47.512773 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:47.560380 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:47.560421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:47.616159 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:47.616221 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:47.631903 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:47.631945 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:47.766159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:50.267365 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:50.287102 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:50.287169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:50.326581 1129259 cri.go:89] found id: ""
	I0318 14:22:50.326618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.326630 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:50.326638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:50.326719 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:50.366526 1129259 cri.go:89] found id: ""
	I0318 14:22:50.366563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.366577 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:50.366585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:50.366656 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:50.407884 1129259 cri.go:89] found id: ""
	I0318 14:22:50.407920 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.407932 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:50.407939 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:50.408011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:50.446932 1129259 cri.go:89] found id: ""
	I0318 14:22:50.446971 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.446982 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:50.446990 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:50.447047 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:50.490489 1129259 cri.go:89] found id: ""
	I0318 14:22:50.490529 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.490542 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:50.490552 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:50.490632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:50.531796 1129259 cri.go:89] found id: ""
	I0318 14:22:50.531876 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.531896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:50.531911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:50.532000 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:50.579429 1129259 cri.go:89] found id: ""
	I0318 14:22:50.579464 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.579473 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:50.579480 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:50.579555 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:50.617981 1129259 cri.go:89] found id: ""
	I0318 14:22:50.618053 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.618070 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:50.618086 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:50.618107 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:50.690265 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:50.690316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:50.738713 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:50.738750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:50.793127 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:50.793176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:50.809608 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:50.809645 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:50.893389 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:53.394103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:53.410405 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:53.410485 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:53.451524 1129259 cri.go:89] found id: ""
	I0318 14:22:53.451563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.451577 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:53.451585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:53.451650 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:53.492923 1129259 cri.go:89] found id: ""
	I0318 14:22:53.492958 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.492972 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:53.492980 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:53.493053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:53.535699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.535738 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.535751 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:53.535757 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:53.535846 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:53.575766 1129259 cri.go:89] found id: ""
	I0318 14:22:53.575807 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.575818 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:53.575843 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:53.575922 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:53.613442 1129259 cri.go:89] found id: ""
	I0318 14:22:53.613473 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.613495 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:53.613502 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:53.613567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:53.655108 1129259 cri.go:89] found id: ""
	I0318 14:22:53.655141 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.655152 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:53.655160 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:53.655233 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:53.693839 1129259 cri.go:89] found id: ""
	I0318 14:22:53.693879 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.693891 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:53.693898 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:53.693971 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:53.736699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.736729 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.736737 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:53.736747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:53.736759 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:53.790612 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:53.790670 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:53.806185 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:53.806226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:53.893535 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:53.893575 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:53.893593 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:53.966434 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:53.966482 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:56.513599 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:56.529572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:56.529652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:56.569850 1129259 cri.go:89] found id: ""
	I0318 14:22:56.569890 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.569905 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:56.569923 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:56.570001 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:56.607508 1129259 cri.go:89] found id: ""
	I0318 14:22:56.607542 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.607554 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:56.607562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:56.607625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:56.644693 1129259 cri.go:89] found id: ""
	I0318 14:22:56.644731 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.644742 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:56.644751 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:56.644825 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:56.686265 1129259 cri.go:89] found id: ""
	I0318 14:22:56.686304 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.686316 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:56.686323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:56.686377 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:56.732519 1129259 cri.go:89] found id: ""
	I0318 14:22:56.732552 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.732559 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:56.732565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:56.732639 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:56.770015 1129259 cri.go:89] found id: ""
	I0318 14:22:56.770049 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.770059 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:56.770067 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:56.770120 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:56.813964 1129259 cri.go:89] found id: ""
	I0318 14:22:56.813993 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.814004 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:56.814012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:56.814108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:56.853650 1129259 cri.go:89] found id: ""
	I0318 14:22:56.853695 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.853705 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:56.853718 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:56.853735 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:56.911922 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:56.911971 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:56.935385 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:56.935415 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:57.040668 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:57.040696 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:57.040710 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:57.123258 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:57.123314 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:59.674542 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:59.688636 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:59.688721 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:59.731479 1129259 cri.go:89] found id: ""
	I0318 14:22:59.731508 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.731517 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:59.731523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:59.731599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:59.778127 1129259 cri.go:89] found id: ""
	I0318 14:22:59.778157 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.778169 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:59.778176 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:59.778245 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:59.820812 1129259 cri.go:89] found id: ""
	I0318 14:22:59.820840 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.820850 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:59.820856 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:59.820930 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:59.866491 1129259 cri.go:89] found id: ""
	I0318 14:22:59.866526 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.866539 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:59.866548 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:59.866614 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:59.907135 1129259 cri.go:89] found id: ""
	I0318 14:22:59.907173 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.907185 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:59.907194 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:59.907266 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:59.948578 1129259 cri.go:89] found id: ""
	I0318 14:22:59.948618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.948627 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:59.948633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:59.948698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:59.986724 1129259 cri.go:89] found id: ""
	I0318 14:22:59.986749 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.986758 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:59.986765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:59.986834 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:00.031190 1129259 cri.go:89] found id: ""
	I0318 14:23:00.031223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:00.031233 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:00.031244 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:00.031260 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:00.087925 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:00.087970 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:00.104778 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:00.104810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:00.190730 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:00.190759 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:00.190775 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:00.282713 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:00.282763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:02.834125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:02.852098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:02.852184 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:02.902683 1129259 cri.go:89] found id: ""
	I0318 14:23:02.902714 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.902726 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:02.902734 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:02.902844 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:02.963685 1129259 cri.go:89] found id: ""
	I0318 14:23:02.963718 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.963742 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:02.963750 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:02.963822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:03.021566 1129259 cri.go:89] found id: ""
	I0318 14:23:03.021600 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.021611 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:03.021618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:03.021689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:03.062577 1129259 cri.go:89] found id: ""
	I0318 14:23:03.062607 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.062616 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:03.062622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:03.062681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:03.101524 1129259 cri.go:89] found id: ""
	I0318 14:23:03.101554 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.101565 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:03.101573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:03.101645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:03.146253 1129259 cri.go:89] found id: ""
	I0318 14:23:03.146282 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.146294 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:03.146309 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:03.146380 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:03.189196 1129259 cri.go:89] found id: ""
	I0318 14:23:03.189230 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.189241 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:03.189250 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:03.189335 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:03.231627 1129259 cri.go:89] found id: ""
	I0318 14:23:03.231663 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.231676 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:03.231688 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:03.231719 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:03.248100 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:03.248144 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:03.325484 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:03.325509 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:03.325522 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:03.406877 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:03.406925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:03.457449 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:03.457487 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.011169 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:06.026962 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:06.027033 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:06.068556 1129259 cri.go:89] found id: ""
	I0318 14:23:06.068595 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.068606 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:06.068615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:06.068695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:06.110627 1129259 cri.go:89] found id: ""
	I0318 14:23:06.110667 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.110679 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:06.110687 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:06.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:06.151933 1129259 cri.go:89] found id: ""
	I0318 14:23:06.152604 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.152620 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:06.152629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:06.152697 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:06.195300 1129259 cri.go:89] found id: ""
	I0318 14:23:06.195338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.195347 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:06.195353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:06.195417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:06.235155 1129259 cri.go:89] found id: ""
	I0318 14:23:06.235207 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.235220 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:06.235229 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:06.235289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:06.282729 1129259 cri.go:89] found id: ""
	I0318 14:23:06.282772 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.282785 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:06.282793 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:06.282869 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:06.323908 1129259 cri.go:89] found id: ""
	I0318 14:23:06.323940 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.323949 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:06.323955 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:06.324011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:06.365846 1129259 cri.go:89] found id: ""
	I0318 14:23:06.365888 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.365902 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:06.365915 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:06.365934 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:06.413646 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:06.413696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.465648 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:06.465688 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:06.480926 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:06.480958 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:06.554929 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:06.554966 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:06.554985 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.139322 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:09.155700 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:09.155768 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:09.200557 1129259 cri.go:89] found id: ""
	I0318 14:23:09.200585 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.200593 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:09.200599 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:09.200653 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:09.239535 1129259 cri.go:89] found id: ""
	I0318 14:23:09.239573 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.239596 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:09.239613 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:09.239698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:09.279206 1129259 cri.go:89] found id: ""
	I0318 14:23:09.279240 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.279249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:09.279256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:09.279313 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:09.323928 1129259 cri.go:89] found id: ""
	I0318 14:23:09.323964 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.323977 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:09.323986 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:09.324062 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:09.365760 1129259 cri.go:89] found id: ""
	I0318 14:23:09.365796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.365807 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:09.365814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:09.365887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:09.411362 1129259 cri.go:89] found id: ""
	I0318 14:23:09.411394 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.411405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:09.411415 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:09.411508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:09.452793 1129259 cri.go:89] found id: ""
	I0318 14:23:09.452822 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.452873 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:09.452880 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:09.452939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:09.494230 1129259 cri.go:89] found id: ""
	I0318 14:23:09.494259 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.494269 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:09.494279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:09.494292 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:09.546804 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:09.546848 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:09.562509 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:09.562545 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:09.637701 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:09.637723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:09.637738 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.721916 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:09.721962 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:12.271942 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:12.288424 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:12.288503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:12.329950 1129259 cri.go:89] found id: ""
	I0318 14:23:12.329990 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.330004 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:12.330012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:12.330083 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:12.368748 1129259 cri.go:89] found id: ""
	I0318 14:23:12.368798 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.368812 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:12.368821 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:12.368894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:12.408280 1129259 cri.go:89] found id: ""
	I0318 14:23:12.408313 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.408323 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:12.408329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:12.408385 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:12.449537 1129259 cri.go:89] found id: ""
	I0318 14:23:12.449583 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.449593 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:12.449605 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:12.449661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:12.488394 1129259 cri.go:89] found id: ""
	I0318 14:23:12.488427 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.488441 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:12.488449 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:12.488528 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:12.527613 1129259 cri.go:89] found id: ""
	I0318 14:23:12.527649 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.527658 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:12.527664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:12.527716 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:12.568953 1129259 cri.go:89] found id: ""
	I0318 14:23:12.568983 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.568991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:12.568997 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:12.569051 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:12.609622 1129259 cri.go:89] found id: ""
	I0318 14:23:12.609661 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.609672 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:12.609683 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:12.609696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:12.663119 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:12.663176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:12.679466 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:12.679508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:12.763085 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:12.763110 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:12.763125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:12.848677 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:12.848721 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.393108 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:15.406670 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:15.406821 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:15.445518 1129259 cri.go:89] found id: ""
	I0318 14:23:15.445556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.445567 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:15.445574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:15.445632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:15.488009 1129259 cri.go:89] found id: ""
	I0318 14:23:15.488040 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.488052 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:15.488089 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:15.488160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:15.526067 1129259 cri.go:89] found id: ""
	I0318 14:23:15.526099 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.526108 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:15.526115 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:15.526185 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:15.567573 1129259 cri.go:89] found id: ""
	I0318 14:23:15.567608 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.567622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:15.567630 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:15.567701 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:15.606585 1129259 cri.go:89] found id: ""
	I0318 14:23:15.606615 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.606626 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:15.606642 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:15.606700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:15.645265 1129259 cri.go:89] found id: ""
	I0318 14:23:15.645296 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.645305 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:15.645312 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:15.645368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:15.685299 1129259 cri.go:89] found id: ""
	I0318 14:23:15.685332 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.685342 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:15.685348 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:15.685421 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:15.725781 1129259 cri.go:89] found id: ""
	I0318 14:23:15.725818 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.725832 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:15.725848 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:15.725867 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.769528 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:15.769568 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:15.825418 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:15.825461 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:15.842139 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:15.842173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:15.922354 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:15.922419 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:15.922438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:18.503475 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:18.518462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:18.518561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:18.559354 1129259 cri.go:89] found id: ""
	I0318 14:23:18.559392 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.559404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:18.559412 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:18.559484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:18.604455 1129259 cri.go:89] found id: ""
	I0318 14:23:18.604488 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.604500 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:18.604507 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:18.604592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:18.646032 1129259 cri.go:89] found id: ""
	I0318 14:23:18.646098 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.646110 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:18.646119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:18.646188 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:18.684752 1129259 cri.go:89] found id: ""
	I0318 14:23:18.684791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.684802 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:18.684808 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:18.684863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:18.728256 1129259 cri.go:89] found id: ""
	I0318 14:23:18.728299 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.728321 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:18.728330 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:18.728409 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:18.771335 1129259 cri.go:89] found id: ""
	I0318 14:23:18.771382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.771392 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:18.771398 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:18.771467 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:18.812273 1129259 cri.go:89] found id: ""
	I0318 14:23:18.812305 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.812318 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:18.812331 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:18.812399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:18.854901 1129259 cri.go:89] found id: ""
	I0318 14:23:18.854942 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.854957 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:18.854971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:18.854990 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:18.939982 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:18.940031 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:18.985433 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:18.985465 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:19.041353 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:19.041405 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:19.057764 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:19.057810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:19.131974 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:21.632395 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:21.646344 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:21.646434 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:21.687475 1129259 cri.go:89] found id: ""
	I0318 14:23:21.687526 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.687542 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:21.687553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:21.687636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:21.728684 1129259 cri.go:89] found id: ""
	I0318 14:23:21.728722 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.728734 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:21.728742 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:21.728816 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:21.772395 1129259 cri.go:89] found id: ""
	I0318 14:23:21.772436 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.772449 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:21.772457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:21.772529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:21.812758 1129259 cri.go:89] found id: ""
	I0318 14:23:21.812793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.812804 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:21.812813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:21.812878 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:21.854334 1129259 cri.go:89] found id: ""
	I0318 14:23:21.854376 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.854387 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:21.854395 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:21.854468 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:21.894237 1129259 cri.go:89] found id: ""
	I0318 14:23:21.894270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.894278 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:21.894285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:21.894339 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:21.931671 1129259 cri.go:89] found id: ""
	I0318 14:23:21.931709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.931720 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:21.931729 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:21.931795 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:21.971060 1129259 cri.go:89] found id: ""
	I0318 14:23:21.971091 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.971100 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:21.971111 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:21.971125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:22.055070 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:22.055126 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.101854 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:22.101888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:22.157502 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:22.157550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:22.175612 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:22.175648 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:22.261607 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:24.761996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:24.777475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:24.777545 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:24.818385 1129259 cri.go:89] found id: ""
	I0318 14:23:24.818421 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.818434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:24.818447 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:24.818508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:24.856232 1129259 cri.go:89] found id: ""
	I0318 14:23:24.856270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.856282 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:24.856291 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:24.856360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:24.891887 1129259 cri.go:89] found id: ""
	I0318 14:23:24.891924 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.891936 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:24.891945 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:24.892020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:24.937555 1129259 cri.go:89] found id: ""
	I0318 14:23:24.937594 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.937605 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:24.937614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:24.937689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:24.978561 1129259 cri.go:89] found id: ""
	I0318 14:23:24.978598 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.978609 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:24.978620 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:24.978692 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:25.026398 1129259 cri.go:89] found id: ""
	I0318 14:23:25.026453 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.026462 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:25.026475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:25.026529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:25.063346 1129259 cri.go:89] found id: ""
	I0318 14:23:25.063382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.063394 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:25.063403 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:25.063482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:25.106097 1129259 cri.go:89] found id: ""
	I0318 14:23:25.106135 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.106147 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:25.106160 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:25.106177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:25.162362 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:25.162412 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:25.179898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:25.179943 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:25.281856 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:25.281896 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:25.281914 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:25.371561 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:25.371605 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:27.915774 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:27.931725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:27.931806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:27.971259 1129259 cri.go:89] found id: ""
	I0318 14:23:27.971297 1129259 logs.go:276] 0 containers: []
	W0318 14:23:27.971322 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:27.971340 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:27.971411 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:28.012704 1129259 cri.go:89] found id: ""
	I0318 14:23:28.012735 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.012747 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:28.012755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:28.012829 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:28.051639 1129259 cri.go:89] found id: ""
	I0318 14:23:28.051669 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.051680 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:28.051686 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:28.051753 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:28.091344 1129259 cri.go:89] found id: ""
	I0318 14:23:28.091377 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.091386 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:28.091392 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:28.091445 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:28.131190 1129259 cri.go:89] found id: ""
	I0318 14:23:28.131224 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.131237 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:28.131246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:28.131324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:28.171717 1129259 cri.go:89] found id: ""
	I0318 14:23:28.171756 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.171769 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:28.171777 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:28.171863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:28.207812 1129259 cri.go:89] found id: ""
	I0318 14:23:28.207862 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.207874 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:28.207886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:28.207942 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:28.252721 1129259 cri.go:89] found id: ""
	I0318 14:23:28.252766 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.252779 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:28.252796 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:28.252812 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:28.311227 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:28.311278 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:28.328390 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:28.328422 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:28.413973 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:28.414005 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:28.414026 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:28.504716 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:28.504764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.049944 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:31.065402 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:31.065490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:31.110647 1129259 cri.go:89] found id: ""
	I0318 14:23:31.110675 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.110683 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:31.110690 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:31.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:31.154046 1129259 cri.go:89] found id: ""
	I0318 14:23:31.154075 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.154084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:31.154091 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:31.154162 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:31.191863 1129259 cri.go:89] found id: ""
	I0318 14:23:31.191894 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.191904 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:31.191911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:31.191979 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:31.234961 1129259 cri.go:89] found id: ""
	I0318 14:23:31.234993 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.235003 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:31.235011 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:31.235082 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:31.290365 1129259 cri.go:89] found id: ""
	I0318 14:23:31.290402 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.290414 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:31.290421 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:31.290516 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:31.331162 1129259 cri.go:89] found id: ""
	I0318 14:23:31.331198 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.331211 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:31.331219 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:31.331283 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:31.370382 1129259 cri.go:89] found id: ""
	I0318 14:23:31.370424 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.370436 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:31.370448 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:31.370520 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:31.409913 1129259 cri.go:89] found id: ""
	I0318 14:23:31.409948 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.409959 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:31.409971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:31.409987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:31.493416 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:31.493456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.546275 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:31.546309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:31.598580 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:31.598639 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:31.615741 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:31.615778 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:31.694159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.194339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:34.209763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:34.209849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:34.248405 1129259 cri.go:89] found id: ""
	I0318 14:23:34.248442 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.248456 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:34.248464 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:34.248538 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:34.290217 1129259 cri.go:89] found id: ""
	I0318 14:23:34.290249 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.290263 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:34.290270 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:34.290338 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:34.337403 1129259 cri.go:89] found id: ""
	I0318 14:23:34.337441 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.337452 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:34.337460 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:34.337533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:34.380042 1129259 cri.go:89] found id: ""
	I0318 14:23:34.380082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.380096 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:34.380105 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:34.380181 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:34.417834 1129259 cri.go:89] found id: ""
	I0318 14:23:34.417866 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.417879 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:34.417888 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:34.417960 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:34.456496 1129259 cri.go:89] found id: ""
	I0318 14:23:34.456538 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.456549 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:34.456559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:34.456629 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:34.497772 1129259 cri.go:89] found id: ""
	I0318 14:23:34.497809 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.497822 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:34.497831 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:34.497887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:34.544757 1129259 cri.go:89] found id: ""
	I0318 14:23:34.544811 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.544825 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:34.544840 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:34.544859 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:34.602192 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:34.602237 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:34.619476 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:34.619515 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:34.695721 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.695761 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:34.695781 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:34.773045 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:34.773090 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:37.320468 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:37.335756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:37.335847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:37.379742 1129259 cri.go:89] found id: ""
	I0318 14:23:37.379791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.379804 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:37.379812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:37.379898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:37.421225 1129259 cri.go:89] found id: ""
	I0318 14:23:37.421261 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.421276 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:37.421284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:37.421353 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:37.463393 1129259 cri.go:89] found id: ""
	I0318 14:23:37.463426 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.463435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:37.463441 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:37.463503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:37.505835 1129259 cri.go:89] found id: ""
	I0318 14:23:37.505871 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.505879 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:37.505885 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:37.505951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:37.545983 1129259 cri.go:89] found id: ""
	I0318 14:23:37.546016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.546029 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:37.546037 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:37.546110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:37.585433 1129259 cri.go:89] found id: ""
	I0318 14:23:37.585466 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.585477 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:37.585486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:37.585561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:37.622978 1129259 cri.go:89] found id: ""
	I0318 14:23:37.623016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.623027 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:37.623034 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:37.623110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:37.675689 1129259 cri.go:89] found id: ""
	I0318 14:23:37.675721 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.675732 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:37.675743 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:37.675763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:37.785788 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.785820 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:37.785839 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:37.870218 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:37.870261 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:37.918199 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:37.918236 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:37.975082 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:37.975135 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:40.491216 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:40.507123 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:40.507189 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:40.548763 1129259 cri.go:89] found id: ""
	I0318 14:23:40.548796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.548806 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:40.548812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:40.548865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:40.589821 1129259 cri.go:89] found id: ""
	I0318 14:23:40.589859 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.589872 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:40.589879 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:40.589961 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:40.629571 1129259 cri.go:89] found id: ""
	I0318 14:23:40.629603 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.629615 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:40.629622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:40.629698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:40.668648 1129259 cri.go:89] found id: ""
	I0318 14:23:40.668682 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.668692 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:40.668719 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:40.668789 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:40.712948 1129259 cri.go:89] found id: ""
	I0318 14:23:40.713005 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.713018 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:40.713027 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:40.713103 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:40.763269 1129259 cri.go:89] found id: ""
	I0318 14:23:40.763298 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.763307 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:40.763313 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:40.763366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:40.809737 1129259 cri.go:89] found id: ""
	I0318 14:23:40.809776 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.809789 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:40.809798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:40.809873 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:40.849882 1129259 cri.go:89] found id: ""
	I0318 14:23:40.849921 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.849931 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:40.849941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:40.849961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:40.931042 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:40.931084 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:40.973246 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:40.973280 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:41.028835 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:41.028880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:41.044250 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:41.044293 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:41.116937 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:43.617773 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:43.635147 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:43.635216 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:43.683392 1129259 cri.go:89] found id: ""
	I0318 14:23:43.683430 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.683446 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:43.683455 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:43.683521 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:43.729761 1129259 cri.go:89] found id: ""
	I0318 14:23:43.729801 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.729813 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:43.729820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:43.729888 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:43.790694 1129259 cri.go:89] found id: ""
	I0318 14:23:43.790728 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.790741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:43.790748 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:43.790819 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:43.838506 1129259 cri.go:89] found id: ""
	I0318 14:23:43.838537 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.838548 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:43.838557 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:43.838625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:43.879695 1129259 cri.go:89] found id: ""
	I0318 14:23:43.879725 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.879735 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:43.879743 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:43.879806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:43.919206 1129259 cri.go:89] found id: ""
	I0318 14:23:43.919238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.919250 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:43.919258 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:43.919333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:43.966266 1129259 cri.go:89] found id: ""
	I0318 14:23:43.966308 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.966321 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:43.966329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:43.966399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:44.006272 1129259 cri.go:89] found id: ""
	I0318 14:23:44.006310 1129259 logs.go:276] 0 containers: []
	W0318 14:23:44.006324 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:44.006339 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:44.006358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:44.063345 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:44.063395 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:44.079323 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:44.079365 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:44.158132 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:44.158157 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:44.158177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:44.244657 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:44.244707 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:46.791776 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:46.807457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:46.807547 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:46.849964 1129259 cri.go:89] found id: ""
	I0318 14:23:46.850003 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.850017 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:46.850025 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:46.850084 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:46.893174 1129259 cri.go:89] found id: ""
	I0318 14:23:46.893214 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.893227 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:46.893235 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:46.893314 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:46.933932 1129259 cri.go:89] found id: ""
	I0318 14:23:46.933969 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.933981 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:46.933998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:46.934075 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:46.973034 1129259 cri.go:89] found id: ""
	I0318 14:23:46.973073 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.973085 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:46.973093 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:46.973165 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:47.013465 1129259 cri.go:89] found id: ""
	I0318 14:23:47.013502 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.013515 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:47.013523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:47.013595 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:47.050526 1129259 cri.go:89] found id: ""
	I0318 14:23:47.050556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.050569 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:47.050583 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:47.050651 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:47.090395 1129259 cri.go:89] found id: ""
	I0318 14:23:47.090435 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.090448 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:47.090456 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:47.090533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:47.132761 1129259 cri.go:89] found id: ""
	I0318 14:23:47.132790 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.132799 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:47.132809 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:47.132822 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:47.179035 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:47.179073 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:47.231641 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:47.231687 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:47.248134 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:47.248171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:47.330265 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:47.330294 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:47.330311 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:49.912288 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:49.927753 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:49.927842 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:49.968306 1129259 cri.go:89] found id: ""
	I0318 14:23:49.968338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:49.968348 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:49.968354 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:49.968424 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:50.009781 1129259 cri.go:89] found id: ""
	I0318 14:23:50.009813 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.009821 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:50.009828 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:50.009892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:50.049203 1129259 cri.go:89] found id: ""
	I0318 14:23:50.049238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.049249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:50.049257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:50.049323 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:50.089679 1129259 cri.go:89] found id: ""
	I0318 14:23:50.089709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.089719 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:50.089725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:50.089790 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:50.132352 1129259 cri.go:89] found id: ""
	I0318 14:23:50.132384 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.132395 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:50.132404 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:50.132474 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:50.169043 1129259 cri.go:89] found id: ""
	I0318 14:23:50.169076 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.169089 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:50.169098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:50.169166 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:50.207753 1129259 cri.go:89] found id: ""
	I0318 14:23:50.207793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.207805 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:50.207813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:50.207898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:50.247048 1129259 cri.go:89] found id: ""
	I0318 14:23:50.247082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.247093 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:50.247103 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:50.247114 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:50.299768 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:50.299816 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:50.317627 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:50.317674 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:50.393122 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:50.393152 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:50.393170 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:50.480828 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:50.480880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:53.030467 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.044538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:53.044615 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:53.082312 1129259 cri.go:89] found id: ""
	I0318 14:23:53.082351 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.082361 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:53.082370 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:53.082431 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:53.127597 1129259 cri.go:89] found id: ""
	I0318 14:23:53.127631 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.127640 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:53.127645 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:53.127708 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:53.172152 1129259 cri.go:89] found id: ""
	I0318 14:23:53.172189 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.172203 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:53.172212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:53.172295 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:53.210210 1129259 cri.go:89] found id: ""
	I0318 14:23:53.210268 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.210281 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:53.210289 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:53.210356 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:53.248963 1129259 cri.go:89] found id: ""
	I0318 14:23:53.248995 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.249004 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:53.249010 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:53.249065 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:53.287853 1129259 cri.go:89] found id: ""
	I0318 14:23:53.287886 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.287896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:53.287903 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:53.287956 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:53.326858 1129259 cri.go:89] found id: ""
	I0318 14:23:53.326895 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.326908 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:53.326917 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:53.326987 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:53.369347 1129259 cri.go:89] found id: ""
	I0318 14:23:53.369381 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.369394 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:53.369407 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:53.369424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:53.420342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:53.420387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:53.436718 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:53.436750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:53.517954 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:53.518018 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:53.518036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:53.597726 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:53.597782 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:56.144313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:56.159569 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:56.159663 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:56.198525 1129259 cri.go:89] found id: ""
	I0318 14:23:56.198563 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.198575 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:56.198584 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:56.198662 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:56.242877 1129259 cri.go:89] found id: ""
	I0318 14:23:56.242913 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.242927 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:56.242942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:56.243018 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:56.282499 1129259 cri.go:89] found id: ""
	I0318 14:23:56.282531 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.282541 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:56.282547 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:56.282618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:56.321765 1129259 cri.go:89] found id: ""
	I0318 14:23:56.321810 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.321825 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:56.321833 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:56.321904 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:56.364005 1129259 cri.go:89] found id: ""
	I0318 14:23:56.364042 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.364054 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:56.364064 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:56.364138 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:56.402312 1129259 cri.go:89] found id: ""
	I0318 14:23:56.402339 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.402350 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:56.402356 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:56.402419 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:56.445638 1129259 cri.go:89] found id: ""
	I0318 14:23:56.445674 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.445686 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:56.445694 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:56.445760 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:56.488833 1129259 cri.go:89] found id: ""
	I0318 14:23:56.488870 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.488883 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:56.488896 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:56.488915 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:56.540862 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:56.540907 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:56.557124 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:56.557171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:56.634679 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:56.634711 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:56.634727 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:56.716419 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:56.716464 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.263125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:59.277619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:59.277703 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:59.318616 1129259 cri.go:89] found id: ""
	I0318 14:23:59.318648 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.318661 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:59.318668 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:59.318740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:59.358540 1129259 cri.go:89] found id: ""
	I0318 14:23:59.358577 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.358589 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:59.358597 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:59.358670 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:59.399046 1129259 cri.go:89] found id: ""
	I0318 14:23:59.399082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.399093 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:59.399099 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:59.399169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:59.439165 1129259 cri.go:89] found id: ""
	I0318 14:23:59.439223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.439236 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:59.439245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:59.439312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:59.476719 1129259 cri.go:89] found id: ""
	I0318 14:23:59.476755 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.476767 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:59.476775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:59.476833 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:59.515847 1129259 cri.go:89] found id: ""
	I0318 14:23:59.515878 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.515888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:59.515895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:59.515966 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:59.560831 1129259 cri.go:89] found id: ""
	I0318 14:23:59.560861 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.560871 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:59.560877 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:59.560939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:59.601176 1129259 cri.go:89] found id: ""
	I0318 14:23:59.601209 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.601219 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:59.601237 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:59.601253 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:59.616829 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:59.616862 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:59.695270 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:59.695300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:59.695316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:59.773564 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:59.773610 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.819326 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:59.819364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:02.372331 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:02.388245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:02.388333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:02.425594 1129259 cri.go:89] found id: ""
	I0318 14:24:02.425639 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.425655 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:02.425664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:02.425740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:02.467755 1129259 cri.go:89] found id: ""
	I0318 14:24:02.467786 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.467794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:02.467800 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:02.467890 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:02.510004 1129259 cri.go:89] found id: ""
	I0318 14:24:02.510035 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.510045 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:02.510051 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:02.510104 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:02.555590 1129259 cri.go:89] found id: ""
	I0318 14:24:02.555623 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.555632 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:02.555638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:02.555693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:02.595096 1129259 cri.go:89] found id: ""
	I0318 14:24:02.595125 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.595135 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:02.595141 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:02.595214 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:02.639452 1129259 cri.go:89] found id: ""
	I0318 14:24:02.639482 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.639491 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:02.639498 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:02.639563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:02.677653 1129259 cri.go:89] found id: ""
	I0318 14:24:02.677684 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.677700 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:02.677706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:02.677765 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:02.714853 1129259 cri.go:89] found id: ""
	I0318 14:24:02.714885 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.714898 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:02.714909 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:02.714923 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:02.767697 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:02.767742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:02.782786 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:02.782844 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:02.868981 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:02.869020 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:02.869037 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:02.944382 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:02.944421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.491779 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:05.507129 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:05.507213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:05.548809 1129259 cri.go:89] found id: ""
	I0318 14:24:05.548845 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.548858 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:05.548866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:05.548941 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:05.588005 1129259 cri.go:89] found id: ""
	I0318 14:24:05.588040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.588050 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:05.588056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:05.588108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:05.627670 1129259 cri.go:89] found id: ""
	I0318 14:24:05.627707 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.627720 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:05.627728 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:05.627814 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:05.666900 1129259 cri.go:89] found id: ""
	I0318 14:24:05.666936 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.666948 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:05.666957 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:05.667029 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:05.705796 1129259 cri.go:89] found id: ""
	I0318 14:24:05.705831 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.705844 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:05.705852 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:05.705923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:05.749842 1129259 cri.go:89] found id: ""
	I0318 14:24:05.749875 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.749888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:05.749896 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:05.749981 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:05.790843 1129259 cri.go:89] found id: ""
	I0318 14:24:05.790881 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.790896 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:05.790905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:05.790992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:05.832347 1129259 cri.go:89] found id: ""
	I0318 14:24:05.832383 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.832395 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:05.832408 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:05.832424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.874185 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:05.874219 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:05.929482 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:05.929534 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:05.945151 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:05.945187 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:06.024617 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:06.024644 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:06.024663 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:08.607030 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:08.622039 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:08.622140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:08.661599 1129259 cri.go:89] found id: ""
	I0318 14:24:08.661638 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.661647 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:08.661654 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:08.661728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:08.699890 1129259 cri.go:89] found id: ""
	I0318 14:24:08.699920 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.699931 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:08.699940 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:08.700009 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:08.745504 1129259 cri.go:89] found id: ""
	I0318 14:24:08.745541 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.745554 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:08.745562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:08.745624 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:08.784162 1129259 cri.go:89] found id: ""
	I0318 14:24:08.784204 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.784217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:08.784226 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:08.784302 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:08.824197 1129259 cri.go:89] found id: ""
	I0318 14:24:08.824227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.824236 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:08.824242 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:08.824301 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:08.865096 1129259 cri.go:89] found id: ""
	I0318 14:24:08.865128 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.865137 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:08.865146 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:08.865207 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:08.905337 1129259 cri.go:89] found id: ""
	I0318 14:24:08.905371 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.905385 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:08.905393 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:08.905477 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:08.945837 1129259 cri.go:89] found id: ""
	I0318 14:24:08.945880 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.945894 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:08.945906 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:08.945925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:09.023425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:09.023454 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:09.023473 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:09.107945 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:09.107989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:09.149742 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:09.149804 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:09.202813 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:09.202856 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:11.720686 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:11.735125 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:11.735218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:11.772164 1129259 cri.go:89] found id: ""
	I0318 14:24:11.772198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.772210 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:11.772218 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:11.772285 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:11.811279 1129259 cri.go:89] found id: ""
	I0318 14:24:11.811309 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.811326 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:11.811334 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:11.811402 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:11.855011 1129259 cri.go:89] found id: ""
	I0318 14:24:11.855052 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.855065 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:11.855073 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:11.855146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:11.893168 1129259 cri.go:89] found id: ""
	I0318 14:24:11.893198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.893206 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:11.893212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:11.893273 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:11.930545 1129259 cri.go:89] found id: ""
	I0318 14:24:11.930583 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.930598 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:11.930608 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:11.930680 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:11.974014 1129259 cri.go:89] found id: ""
	I0318 14:24:11.974040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.974049 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:11.974063 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:11.974147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:12.025218 1129259 cri.go:89] found id: ""
	I0318 14:24:12.025247 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.025257 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:12.025263 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:12.025340 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:12.068361 1129259 cri.go:89] found id: ""
	I0318 14:24:12.068393 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.068406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:12.068425 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:12.068444 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:12.122840 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:12.122892 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:12.138841 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:12.138877 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:12.219567 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:12.219588 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:12.219602 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:12.307322 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:12.307368 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:14.855576 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:14.870076 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:14.870160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:14.910346 1129259 cri.go:89] found id: ""
	I0318 14:24:14.910387 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.910399 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:14.910407 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:14.910479 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:14.957120 1129259 cri.go:89] found id: ""
	I0318 14:24:14.957151 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.957165 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:14.957170 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:14.957238 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:14.998329 1129259 cri.go:89] found id: ""
	I0318 14:24:14.998360 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.998372 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:14.998381 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:14.998450 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:15.036994 1129259 cri.go:89] found id: ""
	I0318 14:24:15.037025 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.037034 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:15.037040 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:15.037095 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:15.075241 1129259 cri.go:89] found id: ""
	I0318 14:24:15.075272 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.075282 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:15.075288 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:15.075368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:15.114149 1129259 cri.go:89] found id: ""
	I0318 14:24:15.114199 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.114208 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:15.114215 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:15.114296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:15.155710 1129259 cri.go:89] found id: ""
	I0318 14:24:15.155745 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.155755 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:15.155762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:15.155847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:15.196863 1129259 cri.go:89] found id: ""
	I0318 14:24:15.196899 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.196910 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:15.196928 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:15.196946 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:15.253103 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:15.253147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:15.268783 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:15.268829 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:15.352694 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:15.352723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:15.352743 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:15.435023 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:15.435068 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:17.978170 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.994862 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:17.994929 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:18.036067 1129259 cri.go:89] found id: ""
	I0318 14:24:18.036103 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.036112 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:18.036119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:18.036186 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:18.081249 1129259 cri.go:89] found id: ""
	I0318 14:24:18.081280 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.081291 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:18.081297 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:18.081352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:18.122336 1129259 cri.go:89] found id: ""
	I0318 14:24:18.122367 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.122376 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:18.122382 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:18.122441 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:18.163897 1129259 cri.go:89] found id: ""
	I0318 14:24:18.163931 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.163940 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:18.163949 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:18.164012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:18.206744 1129259 cri.go:89] found id: ""
	I0318 14:24:18.206781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.206792 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:18.206798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:18.206881 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:18.245738 1129259 cri.go:89] found id: ""
	I0318 14:24:18.245767 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.245778 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:18.245786 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:18.245851 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:18.285181 1129259 cri.go:89] found id: ""
	I0318 14:24:18.285211 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.285221 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:18.285228 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:18.285282 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:18.328130 1129259 cri.go:89] found id: ""
	I0318 14:24:18.328162 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.328174 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:18.328193 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:18.328210 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:18.410346 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:18.410387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:18.467118 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:18.467154 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:18.530635 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:18.530704 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:18.549898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:18.549952 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:18.646134 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.146368 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:21.162077 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:21.162156 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:21.200211 1129259 cri.go:89] found id: ""
	I0318 14:24:21.200242 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.200251 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:21.200257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:21.200329 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:21.241228 1129259 cri.go:89] found id: ""
	I0318 14:24:21.241265 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.241277 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:21.241284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:21.241359 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:21.278110 1129259 cri.go:89] found id: ""
	I0318 14:24:21.278147 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.278159 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:21.278167 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:21.278240 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:21.317067 1129259 cri.go:89] found id: ""
	I0318 14:24:21.317104 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.317115 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:21.317124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:21.317201 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:21.356217 1129259 cri.go:89] found id: ""
	I0318 14:24:21.356251 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.356260 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:21.356267 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:21.356326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:21.394990 1129259 cri.go:89] found id: ""
	I0318 14:24:21.395031 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.395047 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:21.395056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:21.395136 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:21.435880 1129259 cri.go:89] found id: ""
	I0318 14:24:21.435913 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.435928 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:21.435937 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:21.436023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:21.477754 1129259 cri.go:89] found id: ""
	I0318 14:24:21.477801 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.477814 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:21.477826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:21.477851 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:21.493178 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:21.493220 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:21.570200 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.570239 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:21.570257 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:21.658100 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:21.658147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.703286 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:21.703327 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.266730 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:24.285544 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:24.285655 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:24.338183 1129259 cri.go:89] found id: ""
	I0318 14:24:24.338234 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.338248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:24.338256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:24.338326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:24.407496 1129259 cri.go:89] found id: ""
	I0318 14:24:24.407529 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.407543 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:24.407551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:24.407618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:24.457689 1129259 cri.go:89] found id: ""
	I0318 14:24:24.457728 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.457741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:24.457749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:24.457831 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:24.498685 1129259 cri.go:89] found id: ""
	I0318 14:24:24.498709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.498718 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:24.498725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:24.498783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:24.537966 1129259 cri.go:89] found id: ""
	I0318 14:24:24.537999 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.538009 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:24.538016 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:24.538070 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:24.576493 1129259 cri.go:89] found id: ""
	I0318 14:24:24.576522 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.576532 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:24.576538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:24.576592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:24.613764 1129259 cri.go:89] found id: ""
	I0318 14:24:24.613799 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.613812 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:24.613820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:24.613893 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:24.655862 1129259 cri.go:89] found id: ""
	I0318 14:24:24.655892 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.655906 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:24.655919 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:24.655937 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.710557 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:24.710604 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:24.725755 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:24.725792 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:24.805585 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:24.805616 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:24.805633 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:24.889922 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:24.889989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:27.437998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:27.454560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:27.454664 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:27.493973 1129259 cri.go:89] found id: ""
	I0318 14:24:27.494003 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.494011 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:27.494019 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:27.494078 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:27.543071 1129259 cri.go:89] found id: ""
	I0318 14:24:27.543109 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.543122 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:27.543131 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:27.543211 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:27.586163 1129259 cri.go:89] found id: ""
	I0318 14:24:27.586196 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.586212 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:27.586220 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:27.586324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:27.625233 1129259 cri.go:89] found id: ""
	I0318 14:24:27.625271 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.625284 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:27.625293 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:27.625365 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:27.663729 1129259 cri.go:89] found id: ""
	I0318 14:24:27.663772 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.663782 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:27.663798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:27.663887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:27.702041 1129259 cri.go:89] found id: ""
	I0318 14:24:27.702072 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.702082 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:27.702090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:27.702158 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:27.745186 1129259 cri.go:89] found id: ""
	I0318 14:24:27.745216 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.745226 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:27.745233 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:27.745296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:27.786673 1129259 cri.go:89] found id: ""
	I0318 14:24:27.786709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.786719 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:27.786729 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:27.786742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:27.842472 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:27.842531 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:27.856985 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:27.857016 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:27.935445 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:27.935478 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:27.935496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:28.024737 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:28.024795 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:30.571003 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:30.585617 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:30.585714 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:30.628461 1129259 cri.go:89] found id: ""
	I0318 14:24:30.628488 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.628497 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:30.628503 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:30.628566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:30.674555 1129259 cri.go:89] found id: ""
	I0318 14:24:30.674595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.674610 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:30.674618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:30.674695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:30.714899 1129259 cri.go:89] found id: ""
	I0318 14:24:30.714950 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.714961 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:30.714970 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:30.715039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:30.756263 1129259 cri.go:89] found id: ""
	I0318 14:24:30.756295 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.756305 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:30.756311 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:30.756366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:30.795213 1129259 cri.go:89] found id: ""
	I0318 14:24:30.795244 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.795258 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:30.795265 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:30.795336 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:30.837198 1129259 cri.go:89] found id: ""
	I0318 14:24:30.837233 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.837242 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:30.837248 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:30.837306 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:30.875367 1129259 cri.go:89] found id: ""
	I0318 14:24:30.875404 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.875417 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:30.875427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:30.875510 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:30.918664 1129259 cri.go:89] found id: ""
	I0318 14:24:30.918701 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.918713 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:30.918727 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:30.918747 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:31.004325 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:31.004350 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:31.004367 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:31.093837 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:31.093882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:31.138285 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:31.138318 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:31.192059 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:31.192106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:33.708873 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:33.723861 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:33.723954 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:33.766843 1129259 cri.go:89] found id: ""
	I0318 14:24:33.766884 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.766899 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:33.766908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:33.766991 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:33.808273 1129259 cri.go:89] found id: ""
	I0318 14:24:33.808308 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.808319 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:33.808327 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:33.808401 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:33.847755 1129259 cri.go:89] found id: ""
	I0318 14:24:33.847789 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.847801 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:33.847823 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:33.847909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:33.888733 1129259 cri.go:89] found id: ""
	I0318 14:24:33.888785 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.888807 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:33.888817 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:33.888892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:33.927231 1129259 cri.go:89] found id: ""
	I0318 14:24:33.927281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.927294 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:33.927301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:33.927370 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:33.968573 1129259 cri.go:89] found id: ""
	I0318 14:24:33.968602 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.968612 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:33.968619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:33.968685 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:34.019265 1129259 cri.go:89] found id: ""
	I0318 14:24:34.019298 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.019314 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:34.019321 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:34.019392 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:34.059195 1129259 cri.go:89] found id: ""
	I0318 14:24:34.059226 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.059237 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:34.059251 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:34.059268 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:34.101211 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:34.101252 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:34.154985 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:34.155029 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:34.169762 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:34.169798 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:34.247258 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:34.247289 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:34.247304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:36.829539 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:36.844908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:36.845003 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:36.883646 1129259 cri.go:89] found id: ""
	I0318 14:24:36.883673 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.883682 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:36.883688 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:36.883742 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:36.927651 1129259 cri.go:89] found id: ""
	I0318 14:24:36.927685 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.927700 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:36.927706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:36.927774 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:36.972206 1129259 cri.go:89] found id: ""
	I0318 14:24:36.972243 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.972256 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:36.972264 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:36.972337 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:37.011161 1129259 cri.go:89] found id: ""
	I0318 14:24:37.011203 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.011217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:37.011225 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:37.011293 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:37.050426 1129259 cri.go:89] found id: ""
	I0318 14:24:37.050456 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.050465 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:37.050472 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:37.050525 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:37.090240 1129259 cri.go:89] found id: ""
	I0318 14:24:37.090277 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.090288 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:37.090296 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:37.090371 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:37.138359 1129259 cri.go:89] found id: ""
	I0318 14:24:37.138392 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.138405 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:37.138414 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:37.138484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:37.175367 1129259 cri.go:89] found id: ""
	I0318 14:24:37.175397 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.175406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:37.175419 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:37.175438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.190633 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:37.190665 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:37.266426 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:37.266455 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:37.266474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:37.352005 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:37.352052 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:37.398004 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:37.398042 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:39.957926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:39.972906 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:39.972994 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:40.015482 1129259 cri.go:89] found id: ""
	I0318 14:24:40.015531 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.015543 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:40.015553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:40.015632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:40.057869 1129259 cri.go:89] found id: ""
	I0318 14:24:40.057901 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.057913 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:40.057921 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:40.057992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:40.099638 1129259 cri.go:89] found id: ""
	I0318 14:24:40.099666 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.099676 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:40.099683 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:40.099748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:40.137566 1129259 cri.go:89] found id: ""
	I0318 14:24:40.137607 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.137619 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:40.137629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:40.137698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:40.178781 1129259 cri.go:89] found id: ""
	I0318 14:24:40.178816 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.178828 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:40.178835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:40.178902 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:40.221065 1129259 cri.go:89] found id: ""
	I0318 14:24:40.221106 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.221118 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:40.221135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:40.221213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:40.262154 1129259 cri.go:89] found id: ""
	I0318 14:24:40.262193 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.262204 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:40.262212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:40.262288 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:40.302898 1129259 cri.go:89] found id: ""
	I0318 14:24:40.302932 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.302944 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:40.302957 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:40.302973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:40.384224 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:40.384248 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:40.384270 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:40.473257 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:40.473313 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:40.513518 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:40.513571 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:40.569342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:40.569393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:43.085260 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:43.100701 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:43.100773 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:43.141395 1129259 cri.go:89] found id: ""
	I0318 14:24:43.141441 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.141453 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:43.141462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:43.141531 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:43.185883 1129259 cri.go:89] found id: ""
	I0318 14:24:43.185918 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.185929 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:43.185938 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:43.186012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:43.225249 1129259 cri.go:89] found id: ""
	I0318 14:24:43.225281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.225292 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:43.225301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:43.225375 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:43.270433 1129259 cri.go:89] found id: ""
	I0318 14:24:43.270474 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.270484 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:43.270491 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:43.270557 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:43.312947 1129259 cri.go:89] found id: ""
	I0318 14:24:43.312975 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.312986 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:43.312994 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:43.313061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:43.352095 1129259 cri.go:89] found id: ""
	I0318 14:24:43.352130 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.352144 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:43.352153 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:43.352222 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:43.394789 1129259 cri.go:89] found id: ""
	I0318 14:24:43.394820 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.394833 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:43.394840 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:43.394913 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:43.440612 1129259 cri.go:89] found id: ""
	I0318 14:24:43.440646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.440655 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:43.440668 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:43.440686 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:43.497257 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:43.497304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:43.513680 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:43.513715 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:43.599437 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:43.599471 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:43.599490 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:43.681435 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:43.681480 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:46.227650 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:46.242656 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:46.242724 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:46.288400 1129259 cri.go:89] found id: ""
	I0318 14:24:46.288434 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.288448 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:46.288457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:46.288544 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:46.327648 1129259 cri.go:89] found id: ""
	I0318 14:24:46.327691 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.327704 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:46.327712 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:46.327785 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:46.370251 1129259 cri.go:89] found id: ""
	I0318 14:24:46.370292 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.370305 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:46.370322 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:46.370404 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:46.413589 1129259 cri.go:89] found id: ""
	I0318 14:24:46.413629 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.413639 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:46.413646 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:46.413712 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:46.453557 1129259 cri.go:89] found id: ""
	I0318 14:24:46.453593 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.453606 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:46.453615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:46.453696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:46.492502 1129259 cri.go:89] found id: ""
	I0318 14:24:46.492538 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.492552 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:46.492560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:46.492641 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:46.534614 1129259 cri.go:89] found id: ""
	I0318 14:24:46.534646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.534656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:46.534662 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:46.534722 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:46.576300 1129259 cri.go:89] found id: ""
	I0318 14:24:46.576331 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.576340 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:46.576351 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:46.576363 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.665281 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:46.665329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:46.712011 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:46.712050 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:46.799071 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:46.799128 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:46.814892 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:46.814921 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:46.893065 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.393340 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:49.407307 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:49.407388 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:49.449296 1129259 cri.go:89] found id: ""
	I0318 14:24:49.449330 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.449343 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:49.449351 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:49.449412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:49.489753 1129259 cri.go:89] found id: ""
	I0318 14:24:49.489781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.489790 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:49.489796 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:49.489865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:49.533692 1129259 cri.go:89] found id: ""
	I0318 14:24:49.533740 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.533756 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:49.533765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:49.533849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:49.580932 1129259 cri.go:89] found id: ""
	I0318 14:24:49.580980 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.580992 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:49.581001 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:49.581090 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:49.617642 1129259 cri.go:89] found id: ""
	I0318 14:24:49.617672 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.617684 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:49.617692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:49.617758 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:49.655313 1129259 cri.go:89] found id: ""
	I0318 14:24:49.655342 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.655351 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:49.655358 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:49.655412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:49.694613 1129259 cri.go:89] found id: ""
	I0318 14:24:49.694645 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.694656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:49.694665 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:49.694735 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:49.736954 1129259 cri.go:89] found id: ""
	I0318 14:24:49.737005 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.737017 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:49.737030 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:49.737051 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:49.779496 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:49.779540 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:49.836505 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:49.836549 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:49.853299 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:49.853329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:49.929231 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.929254 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:49.929269 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:52.513104 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:52.534931 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:52.535032 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:52.578668 1129259 cri.go:89] found id: ""
	I0318 14:24:52.578706 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.578720 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:52.578731 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:52.578788 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:52.616799 1129259 cri.go:89] found id: ""
	I0318 14:24:52.616829 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.616838 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:52.616845 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:52.616909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:52.659502 1129259 cri.go:89] found id: ""
	I0318 14:24:52.659595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.659616 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:52.659627 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:52.659696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:52.704402 1129259 cri.go:89] found id: ""
	I0318 14:24:52.704431 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.704439 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:52.704446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:52.704524 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:52.748018 1129259 cri.go:89] found id: ""
	I0318 14:24:52.748043 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.748052 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:52.748059 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:52.748128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:52.786901 1129259 cri.go:89] found id: ""
	I0318 14:24:52.786942 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.786956 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:52.786966 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:52.787040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:52.828259 1129259 cri.go:89] found id: ""
	I0318 14:24:52.828288 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.828298 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:52.828304 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:52.828360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:52.867439 1129259 cri.go:89] found id: ""
	I0318 14:24:52.867470 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.867482 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:52.867495 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:52.867513 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:52.920709 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:52.920755 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:52.936596 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:52.936631 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:53.012271 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:53.012300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:53.012315 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.092318 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:53.092358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:55.642662 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:55.656650 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:55.656725 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:55.700050 1129259 cri.go:89] found id: ""
	I0318 14:24:55.700085 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.700099 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:55.700109 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:55.700183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:55.742561 1129259 cri.go:89] found id: ""
	I0318 14:24:55.742599 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.742608 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:55.742614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:55.742668 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:55.780395 1129259 cri.go:89] found id: ""
	I0318 14:24:55.780427 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.780435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:55.780442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:55.780505 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:55.819259 1129259 cri.go:89] found id: ""
	I0318 14:24:55.819291 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.819301 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:55.819310 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:55.819366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:55.859189 1129259 cri.go:89] found id: ""
	I0318 14:24:55.859227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.859240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:55.859249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:55.859322 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:55.900012 1129259 cri.go:89] found id: ""
	I0318 14:24:55.900050 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.900062 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:55.900070 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:55.900146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:55.936548 1129259 cri.go:89] found id: ""
	I0318 14:24:55.936578 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.936587 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:55.936595 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:55.936661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:55.977201 1129259 cri.go:89] found id: ""
	I0318 14:24:55.977241 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.977254 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:55.977266 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:55.977281 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:56.030548 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:56.030603 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:56.047923 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:56.047959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:56.129425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:56.129457 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:56.129474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:56.224109 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:56.224173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.771513 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:58.786323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:58.786416 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:58.832801 1129259 cri.go:89] found id: ""
	I0318 14:24:58.832843 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.832856 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:58.832868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:58.832945 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:58.873757 1129259 cri.go:89] found id: ""
	I0318 14:24:58.873792 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.873802 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:58.873811 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:58.873875 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:58.920727 1129259 cri.go:89] found id: ""
	I0318 14:24:58.920759 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.920769 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:58.920775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:58.920841 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:58.975483 1129259 cri.go:89] found id: ""
	I0318 14:24:58.975524 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.975538 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:58.975549 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:58.975627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:59.027055 1129259 cri.go:89] found id: ""
	I0318 14:24:59.027092 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.027104 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:59.027113 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:59.027195 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:59.073394 1129259 cri.go:89] found id: ""
	I0318 14:24:59.073435 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.073457 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:59.073466 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:59.073536 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:59.114945 1129259 cri.go:89] found id: ""
	I0318 14:24:59.114982 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.114991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:59.114998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:59.115056 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:59.155496 1129259 cri.go:89] found id: ""
	I0318 14:24:59.155533 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.155545 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:59.155558 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:59.155574 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:59.214435 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:59.214476 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:59.230733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:59.230780 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:59.308976 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:59.309007 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:59.309024 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:59.396237 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:59.396287 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:01.941736 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:01.955973 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:01.956058 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:01.995149 1129259 cri.go:89] found id: ""
	I0318 14:25:01.995187 1129259 logs.go:276] 0 containers: []
	W0318 14:25:01.995208 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:01.995217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:01.995287 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:02.036739 1129259 cri.go:89] found id: ""
	I0318 14:25:02.036780 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.036794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:02.036804 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:02.036880 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:02.074909 1129259 cri.go:89] found id: ""
	I0318 14:25:02.074937 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.074947 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:02.074954 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:02.075039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:02.112164 1129259 cri.go:89] found id: ""
	I0318 14:25:02.112203 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.112215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:02.112223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:02.112281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:02.150756 1129259 cri.go:89] found id: ""
	I0318 14:25:02.150795 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.150808 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:02.150816 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:02.150885 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:02.194475 1129259 cri.go:89] found id: ""
	I0318 14:25:02.194511 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.194522 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:02.194531 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:02.194603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:02.237472 1129259 cri.go:89] found id: ""
	I0318 14:25:02.237499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.237508 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:02.237514 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:02.237582 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:02.278094 1129259 cri.go:89] found id: ""
	I0318 14:25:02.278136 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.278157 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:02.278171 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:02.278190 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:02.366946 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:02.367004 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.412234 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:02.412267 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:02.470036 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:02.470109 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:02.487051 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:02.487085 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:02.574515 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.074768 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:05.090386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:05.090466 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:05.131144 1129259 cri.go:89] found id: ""
	I0318 14:25:05.131180 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.131190 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:05.131198 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:05.131254 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:05.171613 1129259 cri.go:89] found id: ""
	I0318 14:25:05.171653 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.171668 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:05.171676 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:05.171748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:05.219256 1129259 cri.go:89] found id: ""
	I0318 14:25:05.219296 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.219310 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:05.219320 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:05.219410 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:05.258580 1129259 cri.go:89] found id: ""
	I0318 14:25:05.258615 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.258625 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:05.258633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:05.258688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:05.297198 1129259 cri.go:89] found id: ""
	I0318 14:25:05.297230 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.297240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:05.297249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:05.297319 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:05.341148 1129259 cri.go:89] found id: ""
	I0318 14:25:05.341184 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.341196 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:05.341205 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:05.341274 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:05.382094 1129259 cri.go:89] found id: ""
	I0318 14:25:05.382121 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.382129 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:05.382135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:05.382199 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:05.422027 1129259 cri.go:89] found id: ""
	I0318 14:25:05.422074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.422083 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:05.422092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:05.422106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:05.474193 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:05.474238 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:05.490325 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:05.490364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:05.566999 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.567029 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:05.567048 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:05.647205 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:05.647247 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:08.192390 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:08.207905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:08.207992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:08.247221 1129259 cri.go:89] found id: ""
	I0318 14:25:08.247257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.247269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:08.247278 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:08.247347 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:08.289460 1129259 cri.go:89] found id: ""
	I0318 14:25:08.289496 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.289509 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:08.289516 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:08.289601 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:08.330232 1129259 cri.go:89] found id: ""
	I0318 14:25:08.330273 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.330286 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:08.330294 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:08.330366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:08.368035 1129259 cri.go:89] found id: ""
	I0318 14:25:08.368074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.368086 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:08.368094 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:08.368170 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:08.413598 1129259 cri.go:89] found id: ""
	I0318 14:25:08.413631 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.413641 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:08.413647 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:08.413745 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:08.451706 1129259 cri.go:89] found id: ""
	I0318 14:25:08.451742 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.451754 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:08.451762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:08.451856 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:08.491037 1129259 cri.go:89] found id: ""
	I0318 14:25:08.491075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.491088 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:08.491096 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:08.491175 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:08.529376 1129259 cri.go:89] found id: ""
	I0318 14:25:08.529412 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.529423 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:08.529435 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:08.529453 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:08.586539 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:08.586580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:08.602197 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:08.602226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:08.678158 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:08.678186 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:08.678202 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:08.764272 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:08.764326 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:11.307681 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:11.322482 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:11.322565 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:11.361333 1129259 cri.go:89] found id: ""
	I0318 14:25:11.361366 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.361378 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:11.361386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:11.361457 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:11.399404 1129259 cri.go:89] found id: ""
	I0318 14:25:11.399444 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.399468 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:11.399486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:11.399556 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:11.438279 1129259 cri.go:89] found id: ""
	I0318 14:25:11.438324 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.438338 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:11.438350 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:11.438426 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:11.474991 1129259 cri.go:89] found id: ""
	I0318 14:25:11.475039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.475050 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:11.475058 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:11.475128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:11.511152 1129259 cri.go:89] found id: ""
	I0318 14:25:11.511185 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.511195 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:11.511204 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:11.511271 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:11.549752 1129259 cri.go:89] found id: ""
	I0318 14:25:11.549794 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.549806 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:11.549814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:11.549886 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:11.587089 1129259 cri.go:89] found id: ""
	I0318 14:25:11.587117 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.587135 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:11.587152 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:11.587205 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:11.621515 1129259 cri.go:89] found id: ""
	I0318 14:25:11.621547 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.621559 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:11.621574 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:11.621592 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:11.680905 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:11.680948 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:11.696472 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:11.696508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:11.772013 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:11.772035 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:11.772054 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:11.855131 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:11.855182 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:14.396034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:14.410601 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:14.410677 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:14.449351 1129259 cri.go:89] found id: ""
	I0318 14:25:14.449392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.449404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:14.449413 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:14.449484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:14.488011 1129259 cri.go:89] found id: ""
	I0318 14:25:14.488039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.488049 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:14.488055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:14.488115 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:14.529089 1129259 cri.go:89] found id: ""
	I0318 14:25:14.529128 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.529141 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:14.529148 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:14.529219 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:14.567919 1129259 cri.go:89] found id: ""
	I0318 14:25:14.567952 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.567962 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:14.567975 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:14.568039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:14.604744 1129259 cri.go:89] found id: ""
	I0318 14:25:14.604785 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.604798 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:14.604806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:14.604872 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:14.643367 1129259 cri.go:89] found id: ""
	I0318 14:25:14.643396 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.643405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:14.643411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:14.643473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:14.680584 1129259 cri.go:89] found id: ""
	I0318 14:25:14.680623 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.680639 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:14.680652 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:14.680726 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:14.720040 1129259 cri.go:89] found id: ""
	I0318 14:25:14.720070 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.720080 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:14.720092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:14.720106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:14.773483 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:14.773525 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:14.788628 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:14.788664 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:14.862912 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:14.862941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:14.862959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:14.945001 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:14.945047 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:17.491984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:17.505305 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:17.505373 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:17.548465 1129259 cri.go:89] found id: ""
	I0318 14:25:17.548493 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.548501 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:17.548508 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:17.548566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:17.590043 1129259 cri.go:89] found id: ""
	I0318 14:25:17.590075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.590084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:17.590090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:17.590147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:17.628014 1129259 cri.go:89] found id: ""
	I0318 14:25:17.628042 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.628051 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:17.628057 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:17.628108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:17.666781 1129259 cri.go:89] found id: ""
	I0318 14:25:17.666814 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.666826 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:17.666835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:17.666892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:17.705989 1129259 cri.go:89] found id: ""
	I0318 14:25:17.706028 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.706048 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:17.706056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:17.706134 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:17.743782 1129259 cri.go:89] found id: ""
	I0318 14:25:17.743815 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.743843 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:17.743853 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:17.743923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:17.787400 1129259 cri.go:89] found id: ""
	I0318 14:25:17.787431 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.787439 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:17.787446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:17.787509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:17.825236 1129259 cri.go:89] found id: ""
	I0318 14:25:17.825270 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.825279 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:17.825291 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:17.825309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:17.877845 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:17.877888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:17.893733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:17.893768 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:17.987782 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:17.987809 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:17.987845 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:18.077756 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:18.077802 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:20.625530 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:20.639692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:20.639783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:20.678892 1129259 cri.go:89] found id: ""
	I0318 14:25:20.678927 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.678939 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:20.678948 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:20.679020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:20.716077 1129259 cri.go:89] found id: ""
	I0318 14:25:20.716109 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.716119 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:20.716124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:20.716179 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:20.756708 1129259 cri.go:89] found id: ""
	I0318 14:25:20.756737 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.756748 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:20.756756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:20.756823 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:20.793692 1129259 cri.go:89] found id: ""
	I0318 14:25:20.793728 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.793740 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:20.793749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:20.793822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:20.834607 1129259 cri.go:89] found id: ""
	I0318 14:25:20.834638 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.834649 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:20.834657 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:20.834728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:20.872583 1129259 cri.go:89] found id: ""
	I0318 14:25:20.872616 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.872625 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:20.872632 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:20.872688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:20.906061 1129259 cri.go:89] found id: ""
	I0318 14:25:20.906099 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.906112 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:20.906120 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:20.906183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:20.942582 1129259 cri.go:89] found id: ""
	I0318 14:25:20.942612 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.942621 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:20.942632 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:20.942646 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:20.958461 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:20.958500 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:21.032841 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:21.032867 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:21.032896 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:21.110717 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:21.110764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:21.160015 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:21.160055 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:23.715103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:23.729231 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:23.729324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:23.779123 1129259 cri.go:89] found id: ""
	I0318 14:25:23.779157 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.779166 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:23.779172 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:23.779247 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:23.820353 1129259 cri.go:89] found id: ""
	I0318 14:25:23.820397 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.820410 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:23.820427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:23.820498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:23.857375 1129259 cri.go:89] found id: ""
	I0318 14:25:23.857405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.857416 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:23.857422 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:23.857490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:23.895114 1129259 cri.go:89] found id: ""
	I0318 14:25:23.895153 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.895165 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:23.895173 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:23.895239 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:23.939728 1129259 cri.go:89] found id: ""
	I0318 14:25:23.939764 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.939776 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:23.939784 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:23.939866 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:23.980585 1129259 cri.go:89] found id: ""
	I0318 14:25:23.980618 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.980631 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:23.980640 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:23.980711 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:24.019562 1129259 cri.go:89] found id: ""
	I0318 14:25:24.019596 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.019604 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:24.019611 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:24.019700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:24.069418 1129259 cri.go:89] found id: ""
	I0318 14:25:24.069455 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.069466 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:24.069478 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:24.069502 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:24.150859 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:24.150893 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:24.150913 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:24.258358 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:24.258408 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:24.304571 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:24.304609 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:24.366826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:24.366882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:26.886056 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:26.904239 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:26.904315 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:26.950812 1129259 cri.go:89] found id: ""
	I0318 14:25:26.950847 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.950859 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:26.950866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:26.950957 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:26.999189 1129259 cri.go:89] found id: ""
	I0318 14:25:26.999224 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.999237 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:26.999246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:26.999312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:27.040452 1129259 cri.go:89] found id: ""
	I0318 14:25:27.040488 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.040499 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:27.040505 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:27.040586 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:27.078751 1129259 cri.go:89] found id: ""
	I0318 14:25:27.078782 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.078792 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:27.078798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:27.078865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:27.116428 1129259 cri.go:89] found id: ""
	I0318 14:25:27.116465 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.116477 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:27.116486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:27.116567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:27.152882 1129259 cri.go:89] found id: ""
	I0318 14:25:27.152922 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.152934 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:27.152942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:27.153023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:27.194470 1129259 cri.go:89] found id: ""
	I0318 14:25:27.194506 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.194518 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:27.194528 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:27.194599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:27.235910 1129259 cri.go:89] found id: ""
	I0318 14:25:27.235939 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.235948 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:27.235959 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:27.235973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:27.302132 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:27.302189 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:27.315806 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:27.315866 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:27.398210 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:27.398240 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:27.398255 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:27.479388 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:27.479432 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:30.026721 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:30.043060 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:30.043133 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:30.083373 1129259 cri.go:89] found id: ""
	I0318 14:25:30.083405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.083415 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:30.083423 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:30.083498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:30.121448 1129259 cri.go:89] found id: ""
	I0318 14:25:30.121485 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.121498 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:30.121506 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:30.121587 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:30.160527 1129259 cri.go:89] found id: ""
	I0318 14:25:30.160557 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.160566 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:30.160574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:30.160636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:30.199812 1129259 cri.go:89] found id: ""
	I0318 14:25:30.199870 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.199884 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:30.199895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:30.199970 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:30.242922 1129259 cri.go:89] found id: ""
	I0318 14:25:30.242959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.242971 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:30.242983 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:30.243053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:30.280918 1129259 cri.go:89] found id: ""
	I0318 14:25:30.280949 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.280962 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:30.280968 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:30.281021 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:30.319928 1129259 cri.go:89] found id: ""
	I0318 14:25:30.319959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.319968 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:30.319974 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:30.320040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:30.363693 1129259 cri.go:89] found id: ""
	I0318 14:25:30.363723 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.363733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:30.363744 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:30.363757 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:30.419559 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:30.419608 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:30.435030 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:30.435078 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:30.514849 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:30.514885 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:30.514903 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:30.601660 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:30.601711 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:33.150817 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:33.165959 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:33.166045 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:33.205823 1129259 cri.go:89] found id: ""
	I0318 14:25:33.205862 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.205874 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:33.205884 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:33.205951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:33.267817 1129259 cri.go:89] found id: ""
	I0318 14:25:33.267865 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.267878 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:33.267886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:33.267977 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:33.309310 1129259 cri.go:89] found id: ""
	I0318 14:25:33.309338 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.309346 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:33.309353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:33.309417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:33.350169 1129259 cri.go:89] found id: ""
	I0318 14:25:33.350202 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.350215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:33.350223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:33.350289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:33.391919 1129259 cri.go:89] found id: ""
	I0318 14:25:33.391961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.391973 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:33.391981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:33.392049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:33.433001 1129259 cri.go:89] found id: ""
	I0318 14:25:33.433056 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.433069 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:33.433078 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:33.433150 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:33.474482 1129259 cri.go:89] found id: ""
	I0318 14:25:33.474513 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.474533 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:33.474542 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:33.474603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:33.512280 1129259 cri.go:89] found id: ""
	I0318 14:25:33.512314 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.512323 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:33.512333 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:33.512347 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:33.593336 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:33.593378 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:33.636001 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:33.636038 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:33.688881 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:33.688922 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:33.704549 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:33.704580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:33.779659 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:36.280240 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:36.295566 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:36.295646 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:36.336195 1129259 cri.go:89] found id: ""
	I0318 14:25:36.336235 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.336248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:36.336257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:36.336334 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:36.378038 1129259 cri.go:89] found id: ""
	I0318 14:25:36.378084 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.378099 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:36.378110 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:36.378191 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:36.425389 1129259 cri.go:89] found id: ""
	I0318 14:25:36.425433 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.425446 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:36.425453 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:36.425512 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:36.464639 1129259 cri.go:89] found id: ""
	I0318 14:25:36.464683 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.464749 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:36.464763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:36.464828 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:36.509515 1129259 cri.go:89] found id: ""
	I0318 14:25:36.509550 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.509563 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:36.509573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:36.509645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:36.554761 1129259 cri.go:89] found id: ""
	I0318 14:25:36.554789 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.554800 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:36.554806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:36.554859 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:36.593817 1129259 cri.go:89] found id: ""
	I0318 14:25:36.593852 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.593861 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:36.593868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:36.593923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:36.634005 1129259 cri.go:89] found id: ""
	I0318 14:25:36.634038 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.634050 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:36.634063 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:36.634081 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:36.687869 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:36.687910 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:36.704507 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:36.704550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:36.785201 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:36.785257 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:36.785275 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:36.866058 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:36.866104 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:39.409796 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:39.426897 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:39.426972 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:39.472221 1129259 cri.go:89] found id: ""
	I0318 14:25:39.472257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.472269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:39.472285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:39.472352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:39.513920 1129259 cri.go:89] found id: ""
	I0318 14:25:39.513961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.513974 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:39.513981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:39.514049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:39.555502 1129259 cri.go:89] found id: ""
	I0318 14:25:39.555538 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.555552 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:39.555565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:39.555627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:39.601583 1129259 cri.go:89] found id: ""
	I0318 14:25:39.601614 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.601622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:39.601628 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:39.601693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:39.648429 1129259 cri.go:89] found id: ""
	I0318 14:25:39.648464 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.648473 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:39.648488 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:39.648564 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:39.698498 1129259 cri.go:89] found id: ""
	I0318 14:25:39.698531 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.698543 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:39.698551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:39.698617 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:39.751350 1129259 cri.go:89] found id: ""
	I0318 14:25:39.751392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.751403 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:39.751411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:39.751482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:39.801912 1129259 cri.go:89] found id: ""
	I0318 14:25:39.801944 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.801956 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:39.801968 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:39.801987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:39.816041 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:39.816076 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:39.899569 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:39.899599 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:39.899621 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:39.980913 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:39.980961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:40.026279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:40.026319 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:42.585034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:42.601055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:42.601161 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:42.652386 1129259 cri.go:89] found id: ""
	I0318 14:25:42.652422 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.652434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:42.652442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:42.652517 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:42.703304 1129259 cri.go:89] found id: ""
	I0318 14:25:42.703341 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.703353 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:42.703361 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:42.703433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:42.747938 1129259 cri.go:89] found id: ""
	I0318 14:25:42.747972 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.747983 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:42.747992 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:42.748061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:42.793889 1129259 cri.go:89] found id: ""
	I0318 14:25:42.793923 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.793934 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:42.793943 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:42.794012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:42.837991 1129259 cri.go:89] found id: ""
	I0318 14:25:42.838096 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.838124 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:42.838143 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:42.838225 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:42.881892 1129259 cri.go:89] found id: ""
	I0318 14:25:42.882011 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.882036 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:42.882055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:42.882140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:42.921175 1129259 cri.go:89] found id: ""
	I0318 14:25:42.921217 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.921229 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:42.921238 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:42.921310 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:42.966634 1129259 cri.go:89] found id: ""
	I0318 14:25:42.966674 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.966687 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:42.966702 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:42.966720 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:42.982243 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:42.982290 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:43.082154 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:43.082187 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:43.082205 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:43.175904 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:43.175953 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:43.220128 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:43.220224 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:45.785917 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:45.801648 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:45.801736 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:45.842731 1129259 cri.go:89] found id: ""
	I0318 14:25:45.842769 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.842782 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:45.842797 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:45.842858 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:45.887726 1129259 cri.go:89] found id: ""
	I0318 14:25:45.887771 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.887783 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:45.887792 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:45.887900 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:45.929349 1129259 cri.go:89] found id: ""
	I0318 14:25:45.929384 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.929395 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:45.929401 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:45.929473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:45.971540 1129259 cri.go:89] found id: ""
	I0318 14:25:45.971582 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.971595 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:45.971604 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:45.971681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:46.012461 1129259 cri.go:89] found id: ""
	I0318 14:25:46.012499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.012521 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:46.012530 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:46.012607 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:46.057527 1129259 cri.go:89] found id: ""
	I0318 14:25:46.057556 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.057566 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:46.057572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:46.057628 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:46.101115 1129259 cri.go:89] found id: ""
	I0318 14:25:46.101146 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.101156 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:46.101163 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:46.101218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:46.144690 1129259 cri.go:89] found id: ""
	I0318 14:25:46.144722 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.144733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:46.144747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:46.144763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:46.198508 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:46.198552 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:46.213920 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:46.213959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:46.307837 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:46.307870 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:46.307884 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:46.393348 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:46.393393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:48.947758 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:48.963529 1129259 kubeadm.go:591] duration metric: took 4m3.701563316s to restartPrimaryControlPlane
	W0318 14:25:48.963609 1129259 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:48.963632 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:50.782362 1129259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.818697959s)
	I0318 14:25:50.782464 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:50.798866 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:50.810841 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:50.822394 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:50.822417 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:50.822464 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:50.833695 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:50.833763 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:50.845393 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:50.856807 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:50.856882 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:50.868756 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.879442 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:50.879517 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.890725 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:50.901505 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:50.901576 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:25:50.912911 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:50.994085 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:25:50.994244 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:51.166111 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:51.166240 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:51.166390 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:51.374393 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:51.376093 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:51.376230 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:51.376323 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:51.376464 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:51.376538 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:51.376620 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:51.376715 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:51.376821 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:51.376930 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:51.377042 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:51.377141 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:51.377202 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:51.377292 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:51.485218 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:51.556003 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:51.865954 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:52.103582 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:52.120863 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:52.122310 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:52.122433 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:52.280292 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:52.282451 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:25:52.282559 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:52.289015 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:52.290093 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:52.290987 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:52.293794 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:26:32.294235 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:26:32.295514 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:32.295750 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:37.296327 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:37.296642 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:47.296738 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:47.296974 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:07.297620 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:07.297848 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:47.299396 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:47.299722 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:47.299759 1129259 kubeadm.go:309] 
	I0318 14:27:47.299848 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:27:47.300040 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:27:47.300062 1129259 kubeadm.go:309] 
	I0318 14:27:47.300106 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:27:47.300187 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:27:47.300340 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:27:47.300356 1129259 kubeadm.go:309] 
	I0318 14:27:47.300534 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:27:47.300590 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:27:47.300636 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:27:47.300646 1129259 kubeadm.go:309] 
	I0318 14:27:47.300803 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:27:47.300929 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:27:47.300942 1129259 kubeadm.go:309] 
	I0318 14:27:47.301093 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:27:47.301232 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:27:47.301346 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:27:47.301475 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:27:47.301496 1129259 kubeadm.go:309] 
	I0318 14:27:47.303477 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:47.303616 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:27:47.303718 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 14:27:47.303903 1129259 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 14:27:47.303969 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:27:47.790664 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:47.807959 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:47.820332 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:47.820357 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:47.820422 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:47.832124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:47.832219 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:47.845017 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:47.856877 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:47.856954 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:47.868530 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.879309 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:47.879394 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.891766 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:47.903303 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:47.903392 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:27:47.914820 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:48.170124 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:29:44.224147 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:29:44.224414 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 14:29:44.225789 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:29:44.225885 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:29:44.226010 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:29:44.226135 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:29:44.226292 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:29:44.226384 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:29:44.228246 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:29:44.228346 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:29:44.228440 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:29:44.228567 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:29:44.228684 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:29:44.228803 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:29:44.228874 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:29:44.229018 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:29:44.229096 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:29:44.229166 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:29:44.229231 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:29:44.229269 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:29:44.229316 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:29:44.229365 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:29:44.229415 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:29:44.229468 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:29:44.229540 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:29:44.229663 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:29:44.229755 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:29:44.229804 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:29:44.229893 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:29:44.231359 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:29:44.231484 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:29:44.231592 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:29:44.231674 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:29:44.231777 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:29:44.231993 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:29:44.232046 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:29:44.232103 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232333 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232411 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232621 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232691 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232896 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232955 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233113 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233178 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233370 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233382 1129259 kubeadm.go:309] 
	I0318 14:29:44.233430 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:29:44.233480 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:29:44.233492 1129259 kubeadm.go:309] 
	I0318 14:29:44.233523 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:29:44.233554 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:29:44.233642 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:29:44.233655 1129259 kubeadm.go:309] 
	I0318 14:29:44.233797 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:29:44.233830 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:29:44.233860 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:29:44.233867 1129259 kubeadm.go:309] 
	I0318 14:29:44.233994 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:29:44.234116 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:29:44.234124 1129259 kubeadm.go:309] 
	I0318 14:29:44.234246 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:29:44.234389 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:29:44.234516 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:29:44.234606 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:29:44.234676 1129259 kubeadm.go:309] 
	I0318 14:29:44.234699 1129259 kubeadm.go:393] duration metric: took 7m59.028536241s to StartCluster
	I0318 14:29:44.234794 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:29:44.234989 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:29:44.301714 1129259 cri.go:89] found id: ""
	I0318 14:29:44.301764 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.301792 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:29:44.301801 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:29:44.301865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:29:44.345158 1129259 cri.go:89] found id: ""
	I0318 14:29:44.345197 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.345209 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:29:44.345217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:29:44.345281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:29:44.381184 1129259 cri.go:89] found id: ""
	I0318 14:29:44.381217 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.381227 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:29:44.381232 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:29:44.381296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:29:44.419906 1129259 cri.go:89] found id: ""
	I0318 14:29:44.419972 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.419987 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:29:44.419996 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:29:44.420085 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:29:44.459683 1129259 cri.go:89] found id: ""
	I0318 14:29:44.459732 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.459747 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:29:44.459755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:29:44.459848 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:29:44.502434 1129259 cri.go:89] found id: ""
	I0318 14:29:44.502477 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.502490 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:29:44.502499 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:29:44.502563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:29:44.543384 1129259 cri.go:89] found id: ""
	I0318 14:29:44.543417 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.543429 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:29:44.543438 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:29:44.543509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:29:44.584405 1129259 cri.go:89] found id: ""
	I0318 14:29:44.584450 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.584463 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:29:44.584478 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:29:44.584496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:29:44.638997 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:29:44.639036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:29:44.656641 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:29:44.656679 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:29:44.757942 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:29:44.757976 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:29:44.757994 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:29:44.878791 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:29:44.878838 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 14:29:44.926371 1129259 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 14:29:44.926432 1129259 out.go:239] * 
	W0318 14:29:44.926513 1129259 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.926548 1129259 out.go:239] * 
	W0318 14:29:44.927402 1129259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:29:44.931815 1129259 out.go:177] 
	W0318 14:29:44.933471 1129259 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.933562 1129259 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 14:29:44.933609 1129259 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 14:29:44.935544 1129259 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-782728 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
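The exit above is minikube's K8S_KUBELET_NOT_RUNNING error. A minimal sketch of the log's own suggestion (not a verified fix): retry the same profile with the kubelet cgroup driver forced to systemd. The flags below are copied verbatim from the failed command above, with only the suggested --extra-config flag appended, and the follow-up checks reuse the journalctl/crictl commands the kubeadm output recommends.

	# Retry the same start command with the workaround suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-782728 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# If the kubelet still fails its health check, inspect it on the node
	out/minikube-linux-amd64 -p old-k8s-version-782728 ssh -- sudo journalctl -xeu kubelet
	out/minikube-linux-amd64 -p old-k8s-version-782728 ssh -- sudo crictl ps -a
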
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 2 (279.529797ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-782728 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-782728 logs -n 25: (1.625777497s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-059272 sudo find                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo find                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-059272 sudo crio                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo crio                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-059272                                       | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| delete  | -p flannel-059272                                      | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-784874 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | disable-driver-mounts-784874                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:14 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-188109             | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767719            | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-075922  | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC | 18 Mar 24 14:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC |                     |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-782728        | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-188109                  | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC | 18 Mar 24 14:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767719                 | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-075922       | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-782728             | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
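	For readability, the last start command in the table above (its flags are wrapped across several table rows) reconstructs to the single invocation below. This is only a reconstruction of what the table already records, using the MINIKUBE_BIN path reported in the log that follows; it is not an additional command that was run:
	
	  out/minikube-linux-amd64 start -p old-k8s-version-782728 --memory=2200 \
	    --alsologtostderr --wait=true --kvm-network=default \
	    --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0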
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:17:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
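	The entries below follow the klog format noted on the line above, so the leading letter of each entry is its severity (I=info, W=warning, E=error, F=fatal). As a quick, assumed post-processing step (the file name last-start.log is hypothetical), warning and error entries can be isolated from a saved copy of this log with:
	
	  grep -E '^[WE][0-9]{4} ' last-start.log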
	I0318 14:17:21.149860 1129259 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:17:21.150009 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150020 1129259 out.go:304] Setting ErrFile to fd 2...
	I0318 14:17:21.150027 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150261 1129259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:17:21.150831 1129259 out.go:298] Setting JSON to false
	I0318 14:17:21.151818 1129259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21588,"bootTime":1710749853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:17:21.151904 1129259 start.go:139] virtualization: kvm guest
	I0318 14:17:21.154086 1129259 out.go:177] * [old-k8s-version-782728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:17:21.155595 1129259 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:17:21.157136 1129259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:17:21.155603 1129259 notify.go:220] Checking for updates...
	I0318 14:17:21.160112 1129259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:17:21.161672 1129259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:17:21.163212 1129259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:17:21.164653 1129259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:17:21.166692 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:17:21.167108 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.167176 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.182529 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0318 14:17:21.183003 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.183578 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.183602 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.183959 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.184192 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.186217 1129259 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 14:17:21.187902 1129259 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:17:21.188243 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.188288 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.204193 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0318 14:17:21.204646 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.205226 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.205262 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.205658 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.205879 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.243555 1129259 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 14:17:21.244857 1129259 start.go:297] selected driver: kvm2
	I0318 14:17:21.244882 1129259 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.245008 1129259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:17:21.245726 1129259 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.245812 1129259 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:17:21.261810 1129259 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:17:21.262852 1129259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:17:21.262962 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:17:21.262975 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:17:21.263064 1129259 start.go:340] cluster config:
	{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.263366 1129259 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.265819 1129259 out.go:177] * Starting "old-k8s-version-782728" primary control-plane node in "old-k8s-version-782728" cluster
	I0318 14:17:24.228169 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:21.267156 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:17:21.267198 1129259 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 14:17:21.267214 1129259 cache.go:56] Caching tarball of preloaded images
	I0318 14:17:21.267311 1129259 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:17:21.267327 1129259 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 14:17:21.267448 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:17:21.267695 1129259 start.go:360] acquireMachinesLock for old-k8s-version-782728: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:17:27.300185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:33.380164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:36.452102 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:42.536087 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:45.604211 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:51.684168 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:54.756227 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:00.836108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:03.908246 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:09.988223 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:13.060123 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:19.140179 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:22.212209 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:28.292206 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:31.364121 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:37.444195 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:40.516108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:46.596160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:49.668120 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:55.748134 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:58.820202 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:04.900183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:07.972128 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:14.052140 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:17.124242 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:23.204175 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:26.276172 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:32.356183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:35.428256 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:41.508181 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:44.580142 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:50.660193 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:53.732160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:59.812151 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:02.884164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:08.964174 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:12.036185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:18.116178 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:21.188147 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:27.268137 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:30.340177 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:33.345074 1128788 start.go:364] duration metric: took 4m12.599457373s to acquireMachinesLock for "embed-certs-767719"
	I0318 14:20:33.345136 1128788 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:33.345145 1128788 fix.go:54] fixHost starting: 
	I0318 14:20:33.345584 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:33.345638 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:33.362007 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0318 14:20:33.362504 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:33.363014 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:20:33.363037 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:33.363432 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:33.363634 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:33.363787 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:20:33.365593 1128788 fix.go:112] recreateIfNeeded on embed-certs-767719: state=Stopped err=<nil>
	I0318 14:20:33.365619 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	W0318 14:20:33.365792 1128788 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:33.367525 1128788 out.go:177] * Restarting existing kvm2 VM for "embed-certs-767719" ...
	I0318 14:20:33.368930 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Start
	I0318 14:20:33.369145 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring networks are active...
	I0318 14:20:33.370041 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network default is active
	I0318 14:20:33.370474 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network mk-embed-certs-767719 is active
	I0318 14:20:33.370832 1128788 main.go:141] libmachine: (embed-certs-767719) Getting domain xml...
	I0318 14:20:33.371609 1128788 main.go:141] libmachine: (embed-certs-767719) Creating domain...
	I0318 14:20:34.596425 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting to get IP...
	I0318 14:20:34.597292 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.597677 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.597753 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.597666 1130210 retry.go:31] will retry after 244.312377ms: waiting for machine to come up
	I0318 14:20:34.843360 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.844039 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.844082 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.843988 1130210 retry.go:31] will retry after 388.782007ms: waiting for machine to come up
	I0318 14:20:35.234931 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.235304 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.235334 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.235252 1130210 retry.go:31] will retry after 449.871291ms: waiting for machine to come up
	I0318 14:20:33.342334 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:33.342408 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.342790 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:20:33.342823 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.343061 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:20:33.344920 1128583 machine.go:97] duration metric: took 4m37.408911801s to provisionDockerMachine
	I0318 14:20:33.344982 1128583 fix.go:56] duration metric: took 4m37.431584024s for fixHost
	I0318 14:20:33.344992 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 4m37.431613044s
	W0318 14:20:33.345017 1128583 start.go:713] error starting host: provision: host is not running
	W0318 14:20:33.345209 1128583 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 14:20:33.345223 1128583 start.go:728] Will try again in 5 seconds ...
	I0318 14:20:35.687048 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.687565 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.687604 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.687508 1130210 retry.go:31] will retry after 470.225551ms: waiting for machine to come up
	I0318 14:20:36.159138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.159642 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.159668 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.159590 1130210 retry.go:31] will retry after 638.634635ms: waiting for machine to come up
	I0318 14:20:36.799431 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.799820 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.799857 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.799764 1130210 retry.go:31] will retry after 758.659569ms: waiting for machine to come up
	I0318 14:20:37.559752 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:37.560189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:37.560224 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:37.560116 1130210 retry.go:31] will retry after 1.163344023s: waiting for machine to come up
	I0318 14:20:38.724981 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:38.725498 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:38.725561 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:38.725341 1130210 retry.go:31] will retry after 1.155934539s: waiting for machine to come up
	I0318 14:20:39.882622 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:39.883025 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:39.883074 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:39.882966 1130210 retry.go:31] will retry after 1.832023161s: waiting for machine to come up
	I0318 14:20:38.347296 1128583 start.go:360] acquireMachinesLock for no-preload-188109: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:20:41.717138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:41.717723 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:41.717757 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:41.717642 1130210 retry.go:31] will retry after 1.526824443s: waiting for machine to come up
	I0318 14:20:43.246389 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:43.246960 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:43.246997 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:43.246901 1130210 retry.go:31] will retry after 2.608273558s: waiting for machine to come up
	I0318 14:20:45.858375 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:45.858919 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:45.858943 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:45.858871 1130210 retry.go:31] will retry after 2.272908905s: waiting for machine to come up
	I0318 14:20:48.134345 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:48.134774 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:48.134826 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:48.134739 1130210 retry.go:31] will retry after 3.671073699s: waiting for machine to come up
	I0318 14:20:53.273198 1128964 start.go:364] duration metric: took 4m11.791347901s to acquireMachinesLock for "default-k8s-diff-port-075922"
	I0318 14:20:53.273284 1128964 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:53.273295 1128964 fix.go:54] fixHost starting: 
	I0318 14:20:53.273834 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:53.273879 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:53.291440 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0318 14:20:53.291988 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:53.292571 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:20:53.292605 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:53.292931 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:53.293125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:20:53.293278 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:20:53.294856 1128964 fix.go:112] recreateIfNeeded on default-k8s-diff-port-075922: state=Stopped err=<nil>
	I0318 14:20:53.294889 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	W0318 14:20:53.295063 1128964 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:53.297784 1128964 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-075922" ...
	I0318 14:20:51.809859 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.810477 1128788 main.go:141] libmachine: (embed-certs-767719) Found IP for machine: 192.168.72.45
	I0318 14:20:51.810503 1128788 main.go:141] libmachine: (embed-certs-767719) Reserving static IP address...
	I0318 14:20:51.810518 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has current primary IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.811061 1128788 main.go:141] libmachine: (embed-certs-767719) Reserved static IP address: 192.168.72.45
	I0318 14:20:51.811104 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.811112 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting for SSH to be available...
	I0318 14:20:51.811137 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | skip adding static IP to network mk-embed-certs-767719 - found existing host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"}
	I0318 14:20:51.811163 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Getting to WaitForSSH function...
	I0318 14:20:51.813739 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814076 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.814121 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH client type: external
	I0318 14:20:51.814225 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa (-rw-------)
	I0318 14:20:51.814282 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:20:51.814327 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | About to run SSH command:
	I0318 14:20:51.814346 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | exit 0
	I0318 14:20:51.944192 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | SSH cmd err, output: <nil>: 
	I0318 14:20:51.944624 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetConfigRaw
	I0318 14:20:51.945477 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:51.948244 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.948667 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.948711 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.949069 1128788 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/config.json ...
	I0318 14:20:51.949305 1128788 machine.go:94] provisionDockerMachine start ...
	I0318 14:20:51.949327 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:51.949596 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:51.952267 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952653 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.952703 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952836 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:51.953047 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953200 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953376 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:51.953525 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:51.953772 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:51.953785 1128788 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:20:52.068806 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:20:52.068847 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069162 1128788 buildroot.go:166] provisioning hostname "embed-certs-767719"
	I0318 14:20:52.069198 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069500 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.072258 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072750 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.072785 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072939 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.073146 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073312 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073492 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.073730 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.073916 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.073934 1128788 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-767719 && echo "embed-certs-767719" | sudo tee /etc/hostname
	I0318 14:20:52.204197 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-767719
	
	I0318 14:20:52.204258 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.207520 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.207927 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.207959 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.208178 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.208478 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208740 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208961 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.209164 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.209352 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.209370 1128788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-767719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-767719/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-767719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:20:52.337185 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:52.337220 1128788 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:20:52.337243 1128788 buildroot.go:174] setting up certificates
	I0318 14:20:52.337253 1128788 provision.go:84] configureAuth start
	I0318 14:20:52.337264 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.337561 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:52.340693 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341061 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.341098 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341280 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.343239 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343570 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.343595 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343709 1128788 provision.go:143] copyHostCerts
	I0318 14:20:52.343782 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:20:52.343794 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:20:52.343888 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:20:52.344001 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:20:52.344010 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:20:52.344038 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:20:52.344095 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:20:52.344103 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:20:52.344126 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:20:52.344220 1128788 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.embed-certs-767719 san=[127.0.0.1 192.168.72.45 embed-certs-767719 localhost minikube]
	I0318 14:20:52.550241 1128788 provision.go:177] copyRemoteCerts
	I0318 14:20:52.550380 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:20:52.550433 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.553182 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553591 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.553626 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553824 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.554056 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.554241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.554392 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:52.645341 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:20:52.672476 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:20:52.698609 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:20:52.724434 1128788 provision.go:87] duration metric: took 387.165868ms to configureAuth
	I0318 14:20:52.724471 1128788 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:20:52.724727 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:20:52.724827 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.727323 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727700 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.727764 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727882 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.728098 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728443 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.728626 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.728859 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.728878 1128788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:20:53.012918 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:20:53.012959 1128788 machine.go:97] duration metric: took 1.063639009s to provisionDockerMachine
	I0318 14:20:53.012976 1128788 start.go:293] postStartSetup for "embed-certs-767719" (driver="kvm2")
	I0318 14:20:53.012990 1128788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:20:53.013039 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.013471 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:20:53.013505 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.016524 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.016929 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.016961 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.017153 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.017372 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.017582 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.017846 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.107977 1128788 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:20:53.113146 1128788 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:20:53.113184 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:20:53.113302 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:20:53.113423 1128788 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:20:53.113558 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:20:53.125166 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:53.152094 1128788 start.go:296] duration metric: took 139.099686ms for postStartSetup
	I0318 14:20:53.152147 1128788 fix.go:56] duration metric: took 19.807001958s for fixHost
	I0318 14:20:53.152194 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.155058 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155371 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.155401 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155643 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.155908 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156138 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156307 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.156536 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:53.156770 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:53.156786 1128788 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:20:53.272998 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771653.240528844
	
	I0318 14:20:53.273029 1128788 fix.go:216] guest clock: 1710771653.240528844
	I0318 14:20:53.273046 1128788 fix.go:229] Guest: 2024-03-18 14:20:53.240528844 +0000 UTC Remote: 2024-03-18 14:20:53.15215228 +0000 UTC m=+272.563569050 (delta=88.376564ms)
	I0318 14:20:53.273075 1128788 fix.go:200] guest clock delta is within tolerance: 88.376564ms
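The fix.go lines above compute the guest/host clock difference (88.376564ms) and accept it because it is under the drift tolerance. A minimal Go sketch of that comparison follows; the 1-second tolerance is an assumed value for illustration, since the threshold minikube actually uses is not shown in this log.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance returns the absolute guest/host clock difference
// and whether it falls under the given tolerance. The tolerance passed in
// main is an assumption for illustration only.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Date(2024, 3, 18, 14, 20, 53, 152152280, time.UTC)  // "Remote" timestamp from the log
	guest := time.Date(2024, 3, 18, 14, 20, 53, 240528844, time.UTC) // "Guest" timestamp from the log
	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // delta=88.376564ms within tolerance: true
}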
	I0318 14:20:53.273083 1128788 start.go:83] releasing machines lock for "embed-certs-767719", held for 19.927965733s
	I0318 14:20:53.273118 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.273431 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:53.276309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276740 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.276768 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276958 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277493 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277716 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277806 1128788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:20:53.277851 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.277976 1128788 ssh_runner.go:195] Run: cat /version.json
	I0318 14:20:53.278002 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.280799 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.280853 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281234 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281263 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281289 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281518 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281616 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281767 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281850 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281945 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282028 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282090 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.282179 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.386584 1128788 ssh_runner.go:195] Run: systemctl --version
	I0318 14:20:53.393371 1128788 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:20:53.547565 1128788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:20:53.554182 1128788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:20:53.554266 1128788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:20:53.573031 1128788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:20:53.573071 1128788 start.go:494] detecting cgroup driver to use...
	I0318 14:20:53.573197 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:20:53.591649 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:20:53.607279 1128788 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:20:53.607359 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:20:53.624327 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:20:53.640398 1128788 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:20:53.759979 1128788 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:20:53.931294 1128788 docker.go:233] disabling docker service ...
	I0318 14:20:53.931381 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:20:53.954433 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:20:53.969396 1128788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:20:54.107898 1128788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:20:54.241874 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:20:54.257748 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:20:54.278981 1128788 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:20:54.279057 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.293329 1128788 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:20:54.293390 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.304838 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.316646 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.328623 1128788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:20:54.340540 1128788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:20:54.352368 1128788 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:20:54.352433 1128788 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:20:54.368965 1128788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:20:54.389268 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:54.511182 1128788 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:20:54.657685 1128788 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:20:54.657798 1128788 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:20:54.663591 1128788 start.go:562] Will wait 60s for crictl version
	I0318 14:20:54.663670 1128788 ssh_runner.go:195] Run: which crictl
	I0318 14:20:54.667903 1128788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:20:54.707961 1128788 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:20:54.708065 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.738240 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.773562 1128788 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:20:54.775286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:54.778784 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779228 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:54.779265 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779498 1128788 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 14:20:54.784575 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
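The command above rewrites /etc/hosts idempotently: it filters out any existing host.minikube.internal line and appends a fresh mapping before copying the result back. A small Go sketch of the same upsert pattern; upsertHostsEntry is an illustrative helper, not part of minikube.

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line that maps name and appends a fresh
// "ip<TAB>name" entry, mirroring the grep -v / echo / cp pipeline in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.72.2\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.72.1", "host.minikube.internal"))
}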
	I0318 14:20:54.799207 1128788 kubeadm.go:877] updating cluster {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:20:54.799380 1128788 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:20:54.799440 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:54.839309 1128788 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:20:54.839387 1128788 ssh_runner.go:195] Run: which lz4
	I0318 14:20:54.844323 1128788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:20:54.850487 1128788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:20:54.850524 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 14:20:53.299380 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Start
	I0318 14:20:53.299595 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring networks are active...
	I0318 14:20:53.300497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network default is active
	I0318 14:20:53.300887 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network mk-default-k8s-diff-port-075922 is active
	I0318 14:20:53.301316 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Getting domain xml...
	I0318 14:20:53.302079 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Creating domain...
	I0318 14:20:54.607619 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting to get IP...
	I0318 14:20:54.608510 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609075 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.609050 1130331 retry.go:31] will retry after 282.377323ms: waiting for machine to come up
	I0318 14:20:54.892766 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893323 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.893259 1130331 retry.go:31] will retry after 264.840581ms: waiting for machine to come up
	I0318 14:20:55.160018 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160536 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160578 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.160460 1130331 retry.go:31] will retry after 402.458985ms: waiting for machine to come up
	I0318 14:20:55.564282 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564773 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.564727 1130331 retry.go:31] will retry after 382.70672ms: waiting for machine to come up
	I0318 14:20:55.949676 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950183 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950218 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.950122 1130331 retry.go:31] will retry after 676.466466ms: waiting for machine to come up
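The retry.go lines above poll the libvirt network for the new machine's DHCP lease with growing, jittered delays. A rough Go sketch of that retry-with-backoff pattern; lookupIP, the returned address, and the backoff constants are assumptions for illustration, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// attempts and lookupIP stand in for querying the network's DHCP leases;
// lookupIP deliberately fails a few times so the retry loop is visible.
var attempts int

func lookupIP() (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.72.100", nil // placeholder address, not taken from the log
}

func main() {
	delay := 250 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, roughly mirroring the increasing
		// "will retry after ..." intervals printed by retry.go.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}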
	I0318 14:20:56.798325 1128788 crio.go:444] duration metric: took 1.954051074s to copy over tarball
	I0318 14:20:56.798418 1128788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:20:59.431722 1128788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.633260911s)
	I0318 14:20:59.431777 1128788 crio.go:451] duration metric: took 2.633417573s to extract the tarball
	I0318 14:20:59.431788 1128788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:20:59.476265 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:59.534130 1128788 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:20:59.534161 1128788 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:20:59.534173 1128788 kubeadm.go:928] updating node { 192.168.72.45 8443 v1.28.4 crio true true} ...
	I0318 14:20:59.534357 1128788 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-767719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:20:59.534499 1128788 ssh_runner.go:195] Run: crio config
	I0318 14:20:59.594778 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:20:59.594814 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:20:59.594831 1128788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:20:59.594894 1128788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.45 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-767719 NodeName:embed-certs-767719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:20:59.595092 1128788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-767719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:20:59.595203 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:20:59.610298 1128788 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:20:59.610388 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:20:59.624050 1128788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 14:20:59.644283 1128788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:20:59.663987 1128788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0318 14:20:59.685379 1128788 ssh_runner.go:195] Run: grep 192.168.72.45	control-plane.minikube.internal$ /etc/hosts
	I0318 14:20:59.690360 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:59.705657 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:59.839158 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:20:59.857617 1128788 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719 for IP: 192.168.72.45
	I0318 14:20:59.857642 1128788 certs.go:194] generating shared ca certs ...
	I0318 14:20:59.857674 1128788 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:20:59.857839 1128788 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:20:59.857882 1128788 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:20:59.857893 1128788 certs.go:256] generating profile certs ...
	I0318 14:20:59.858006 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/client.key
	I0318 14:20:59.858061 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key.f59f641c
	I0318 14:20:59.858098 1128788 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key
	I0318 14:20:59.858268 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:20:59.858301 1128788 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:20:59.858308 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:20:59.858331 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:20:59.858360 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:20:59.858382 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:20:59.858424 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:59.859110 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:20:59.901101 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:20:59.947010 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:20:59.990882 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:00.032358 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 14:21:00.070194 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:00.108670 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:00.137760 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:00.168481 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:00.199292 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:00.228315 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:00.257409 1128788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:00.277720 1128788 ssh_runner.go:195] Run: openssl version
	I0318 14:21:00.284138 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:00.296443 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302083 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302160 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.308748 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:00.322025 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:00.334654 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340319 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340404 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.347454 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:00.359627 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:00.371865 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377236 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377335 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.387041 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:00.404525 1128788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:00.412919 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:00.422577 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:00.434217 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:00.444535 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:00.452863 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:00.459979 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:21:00.467503 1128788 kubeadm.go:391] StartCluster: {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:00.467680 1128788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:00.467780 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.507833 1128788 cri.go:89] found id: ""
	I0318 14:21:00.507926 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:00.519958 1128788 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:00.519982 1128788 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:00.520011 1128788 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:00.520066 1128788 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:00.532229 1128788 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:00.533479 1128788 kubeconfig.go:125] found "embed-certs-767719" server: "https://192.168.72.45:8443"
	I0318 14:21:00.536185 1128788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:00.548434 1128788 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.45
	I0318 14:21:00.548484 1128788 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:00.548498 1128788 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:00.548551 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.592096 1128788 cri.go:89] found id: ""
	I0318 14:21:00.592168 1128788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:00.610826 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:00.622294 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:00.622330 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:00.622386 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:00.633009 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:00.633089 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:20:56.628134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628708 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628747 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:56.628643 1130331 retry.go:31] will retry after 703.45784ms: waiting for machine to come up
	I0318 14:20:57.334203 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334666 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334702 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:57.334600 1130331 retry.go:31] will retry after 1.177266521s: waiting for machine to come up
	I0318 14:20:58.513803 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514452 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514485 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:58.514389 1130331 retry.go:31] will retry after 1.389627955s: waiting for machine to come up
	I0318 14:20:59.906109 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906663 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906750 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:59.906632 1130331 retry.go:31] will retry after 1.239662517s: waiting for machine to come up
	I0318 14:21:01.147929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148325 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:01.148248 1130331 retry.go:31] will retry after 2.183067358s: waiting for machine to come up
	I0318 14:21:00.644684 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:00.921213 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:00.921307 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:00.932412 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.943408 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:00.943481 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.955574 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:00.966416 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:00.966483 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:00.978014 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:00.993622 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:01.128726 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.331974 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.203164646s)
	I0318 14:21:02.332035 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.574592 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.686011 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.821189 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:02.821373 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.322200 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.822207 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.838586 1128788 api_server.go:72] duration metric: took 1.017395673s to wait for apiserver process to appear ...
	I0318 14:21:03.838622 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:03.838660 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:03.839282 1128788 api_server.go:269] stopped: https://192.168.72.45:8443/healthz: Get "https://192.168.72.45:8443/healthz": dial tcp 192.168.72.45:8443: connect: connection refused
	I0318 14:21:04.339675 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
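Here the test repeatedly probes the apiserver's /healthz endpoint until it stops refusing connections and returns 200; the 403 and 500 responses that follow below are intermediate states while post-start hooks finish. A minimal Go sketch of such a poll loop; the endpoint URL, timeouts, and the insecure TLS client are illustrative assumptions, not how minikube's own client is configured.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Skipping TLS verification only mirrors probing a cluster whose CA is not in
// the local trust store; it is not a recommendation.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err) // e.g. connection refused
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body) // e.g. 403 or 500 while booting
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.45:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}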
	I0318 14:21:03.333080 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333620 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333648 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:03.333583 1130331 retry.go:31] will retry after 2.259124316s: waiting for machine to come up
	I0318 14:21:05.594356 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594823 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:05.594754 1130331 retry.go:31] will retry after 2.492274875s: waiting for machine to come up
	I0318 14:21:07.054330 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:07.054373 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:07.054392 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.073841 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.073894 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.339285 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.345307 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.345340 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.838915 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.846722 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.846759 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:08.339409 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:08.344790 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:21:08.358050 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:08.358097 1128788 api_server.go:131] duration metric: took 4.519466088s to wait for apiserver health ...
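For orientation: the checks above are a plain poll of the apiserver's /healthz endpoint roughly every 500ms, in which connection-refused errors, 403 responses (before RBAC is bootstrapped) and 500 responses (while post-start hooks finish) are all treated as "not ready yet", and the loop only stops on a 200 "ok" or an overall deadline. A minimal Go sketch of that pattern follows; it is not minikube's actual implementation, and the endpoint, timeout and TLS handling are placeholder assumptions.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 "ok" or timeout elapses.
	// Non-200 answers and connection errors are treated as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver presents a self-signed cert during bring-up; a real
			// client would pin the cluster CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// 403 (RBAC pending) and 500 (post-start hooks pending) land here.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		// Placeholder endpoint; substitute the control-plane address under test.
		if err := waitForHealthz("https://192.168.72.45:8443/healthz", 3*time.Minute); err != nil {
			fmt.Println(err)
		}
	}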
	I0318 14:21:08.358121 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:21:08.358130 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:08.359982 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:21:08.361428 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:08.378195 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
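The 457-byte conflist copied above is not reproduced in the log. The Go sketch below writes a representative bridge + host-local CNI configuration to the same path purely for illustration; the exact contents, subnet and plugin list are assumptions, not the file minikube actually ships, and the program needs root to write under /etc.

	package main

	import (
		"log"
		"os"
	)

	// A representative bridge CNI configuration (assumed shape; the real
	// 457-byte file may differ in details such as the pod subnet).
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}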
	I0318 14:21:08.409269 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:08.421874 1128788 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:08.421960 1128788 system_pods.go:61] "coredns-5dd5756b68-4dmw2" [324897fc-dd26-47f1-b8bc-4d2ed721a576] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:08.421971 1128788 system_pods.go:61] "etcd-embed-certs-767719" [df147cb8-989c-408d-ade8-547858a8c2bb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:08.421982 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [82f7d170-3b3c-448c-b824-6d263c5c1128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:08.421989 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [cd4dd4f3-a727-4864-b0e9-a89758537de9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:08.422002 1128788 system_pods.go:61] "kube-proxy-mtx9w" [b46b48ff-e4c0-4595-82c4-7c0c86103262] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:08.422010 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [63774f42-c85e-467f-9bd3-0c78d44b2681] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:08.422022 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-jr9wp" [e40748e2-ebc3-4c4f-a9cc-01bbc7416f35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:08.422030 1128788 system_pods.go:61] "storage-provisioner" [1b51e6a7-2693-4d0b-b47e-ccbcb1e46424] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:08.422047 1128788 system_pods.go:74] duration metric: took 12.746875ms to wait for pod list to return data ...
	I0318 14:21:08.422058 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:08.432361 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:08.432461 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:08.432483 1128788 node_conditions.go:105] duration metric: took 10.415127ms to run NodePressure ...
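The system_pods and node_conditions steps above amount to one list of kube-system pods and one read of node capacity. A short client-go sketch of the equivalent calls; the kubeconfig path is a placeholder (minikube builds its REST config internally rather than reading a file like this):

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path for illustration only.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}

		// Equivalent of "waiting for kube-system pods to appear": list them once.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))

		// Equivalent of the NodePressure/capacity check: read node capacity.
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}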
	I0318 14:21:08.432524 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:08.730544 1128788 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:08.735970 1128788 kubeadm.go:733] kubelet initialised
	I0318 14:21:08.736001 1128788 kubeadm.go:734] duration metric: took 5.422027ms waiting for restarted kubelet to initialise ...
	I0318 14:21:08.736042 1128788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:08.745586 1128788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:08.090446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090834 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:08.090779 1130331 retry.go:31] will retry after 3.31085892s: waiting for machine to come up
	I0318 14:21:12.749494 1129259 start.go:364] duration metric: took 3m51.481737314s to acquireMachinesLock for "old-k8s-version-782728"
	I0318 14:21:12.749582 1129259 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:12.749596 1129259 fix.go:54] fixHost starting: 
	I0318 14:21:12.750059 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:12.750110 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:12.772262 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0318 14:21:12.772787 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:12.773383 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:21:12.773408 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:12.773864 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:12.774101 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:12.774261 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetState
	I0318 14:21:12.776193 1129259 fix.go:112] recreateIfNeeded on old-k8s-version-782728: state=Stopped err=<nil>
	I0318 14:21:12.776227 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	W0318 14:21:12.776377 1129259 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:12.778538 1129259 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-782728" ...
	I0318 14:21:11.405935 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has current primary IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406539 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Found IP for machine: 192.168.83.39
	I0318 14:21:11.406553 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserving static IP address...
	I0318 14:21:11.407015 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.407048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | skip adding static IP to network mk-default-k8s-diff-port-075922 - found existing host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"}
	I0318 14:21:11.407066 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserved static IP address: 192.168.83.39
	I0318 14:21:11.407081 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for SSH to be available...
	I0318 14:21:11.407093 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Getting to WaitForSSH function...
	I0318 14:21:11.409327 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409674 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.409706 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409895 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH client type: external
	I0318 14:21:11.409919 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa (-rw-------)
	I0318 14:21:11.410034 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:11.410065 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | About to run SSH command:
	I0318 14:21:11.410089 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | exit 0
	I0318 14:21:11.544258 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | SSH cmd err, output: <nil>: 
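The WaitForSSH exchange above works by invoking the system ssh client with the options shown and running `exit 0` until the command succeeds. A small Go sketch of that probe using os/exec; the user, host, key path and retry count are stand-ins, not values taken from minikube's code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH repeatedly runs "exit 0" on the guest through the system ssh
	// client until it succeeds or the attempts run out.
	func waitForSSH(user, host, keyPath string, attempts int) error {
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				fmt.Sprintf("%s@%s", user, host),
				"exit 0")
			if err := cmd.Run(); err == nil {
				return nil // SSH is available
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("ssh to %s@%s never became available", user, host)
	}

	func main() {
		// Hypothetical values standing in for the machine being provisioned.
		if err := waitForSSH("docker", "192.168.83.39", "/path/to/id_rsa", 20); err != nil {
			fmt.Println(err)
		}
	}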
	I0318 14:21:11.544698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetConfigRaw
	I0318 14:21:11.545370 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.548333 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.548729 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.548764 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.549053 1128964 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/config.json ...
	I0318 14:21:11.549275 1128964 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:11.549295 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:11.549533 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.551799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552156 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.552186 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552280 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.552482 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552657 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552797 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.552994 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.553261 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.553278 1128964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:11.665093 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:11.665132 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665456 1128964 buildroot.go:166] provisioning hostname "default-k8s-diff-port-075922"
	I0318 14:21:11.665493 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665730 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.668911 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.669413 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669679 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.669923 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670319 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.670530 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.670718 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.670734 1128964 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-075922 && echo "default-k8s-diff-port-075922" | sudo tee /etc/hostname
	I0318 14:21:11.807520 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-075922
	
	I0318 14:21:11.807552 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.810614 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811011 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.811047 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811257 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.811480 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811941 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.812155 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.812361 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.812387 1128964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-075922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-075922/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-075922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:11.942984 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:11.943022 1128964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:11.943078 1128964 buildroot.go:174] setting up certificates
	I0318 14:21:11.943094 1128964 provision.go:84] configureAuth start
	I0318 14:21:11.943108 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.943441 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.946659 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947091 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.947125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947328 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.949852 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950275 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.950310 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950496 1128964 provision.go:143] copyHostCerts
	I0318 14:21:11.950579 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:11.950596 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:11.950679 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:11.950859 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:11.950873 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:11.950898 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:11.950964 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:11.950971 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:11.950988 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:11.951041 1128964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-075922 san=[127.0.0.1 192.168.83.39 default-k8s-diff-port-075922 localhost minikube]
	I0318 14:21:12.019678 1128964 provision.go:177] copyRemoteCerts
	I0318 14:21:12.019756 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:12.019788 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.023122 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023603 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.023639 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023862 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.024077 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.024294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.024445 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.112914 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:12.142575 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 14:21:12.171747 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:12.200144 1128964 provision.go:87] duration metric: took 257.034667ms to configureAuth
	I0318 14:21:12.200177 1128964 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:12.200401 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:21:12.200515 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.203573 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.203978 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.204019 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.204160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.204379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204658 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.205131 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.205335 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.205367 1128964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:12.494965 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:12.494997 1128964 machine.go:97] duration metric: took 945.707691ms to provisionDockerMachine
	I0318 14:21:12.495012 1128964 start.go:293] postStartSetup for "default-k8s-diff-port-075922" (driver="kvm2")
	I0318 14:21:12.495026 1128964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:12.495048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.495450 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:12.495486 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.498444 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.498821 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498928 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.499166 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.499363 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.499560 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.588350 1128964 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:12.593611 1128964 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:12.593638 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:12.593714 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:12.593788 1128964 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:12.593875 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:12.605751 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:12.633577 1128964 start.go:296] duration metric: took 138.54984ms for postStartSetup
	I0318 14:21:12.633621 1128964 fix.go:56] duration metric: took 19.360327899s for fixHost
	I0318 14:21:12.633645 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.636446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636822 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.636850 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636989 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.637237 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637428 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637596 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.637786 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.637988 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.638002 1128964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:12.749326 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771672.727120819
	
	I0318 14:21:12.749355 1128964 fix.go:216] guest clock: 1710771672.727120819
	I0318 14:21:12.749364 1128964 fix.go:229] Guest: 2024-03-18 14:21:12.727120819 +0000 UTC Remote: 2024-03-18 14:21:12.633625447 +0000 UTC m=+271.308784721 (delta=93.495372ms)
	I0318 14:21:12.749386 1128964 fix.go:200] guest clock delta is within tolerance: 93.495372ms
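The guest-clock check above runs `date` on the guest, parses its seconds.nanoseconds output, and accepts the result if the skew from the host clock is small (here ~93ms). A Go sketch of that comparison; the one-second tolerance is an assumption chosen for illustration, not minikube's actual threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses a "seconds.nanoseconds" timestamp such as
	// "1710771672.727120819" and returns how far it is from now.
	func guestClockDelta(guest string, now time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return now.Sub(time.Unix(sec, nsec)), nil
	}

	func main() {
		delta, err := guestClockDelta("1710771672.727120819", time.Now())
		if err != nil {
			panic(err)
		}
		// Illustrative tolerance: sub-second skew is accepted, larger skew
		// would trigger a clock sync on the guest.
		if delta < time.Second && delta > -time.Second {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v is too large\n", delta)
		}
	}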
	I0318 14:21:12.749392 1128964 start.go:83] releasing machines lock for "default-k8s-diff-port-075922", held for 19.476136638s
	I0318 14:21:12.749418 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.749732 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:12.752996 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753471 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.753506 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753815 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754448 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754651 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754744 1128964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:12.754791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.754943 1128964 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:12.754970 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.758153 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758303 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758628 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758660 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758694 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758758 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758927 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758988 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759057 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759157 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759251 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759292 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.759371 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.841423 1128964 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:12.868154 1128964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:13.020652 1128964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:13.028168 1128964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:13.028267 1128964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:13.047225 1128964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:13.047264 1128964 start.go:494] detecting cgroup driver to use...
	I0318 14:21:13.047361 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:13.064518 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:13.080271 1128964 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:13.080356 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:13.095583 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:13.110387 1128964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:13.250934 1128964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:13.450657 1128964 docker.go:233] disabling docker service ...
	I0318 14:21:13.450738 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:13.471701 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:13.488157 1128964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:13.644961 1128964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:13.811333 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:13.828584 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:13.852476 1128964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:13.852557 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.864849 1128964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:13.864951 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.877723 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.890337 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
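The sed one-liners above pin the pause image, switch cri-o to the cgroupfs cgroup manager, and reset conmon_cgroup in /etc/crio/crio.conf.d/02-crio.conf. The Go sketch below performs the first two substitutions as regexp rewrites of the same file; it illustrates the edit rather than reproducing minikube's code, and the conmon_cgroup adjustment is omitted for brevity:

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	// rewriteCrioConf applies the same two substitutions the sed commands in the
	// log perform: pin the pause image and force the cgroupfs cgroup manager.
	func rewriteCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
			log.Fatal(err)
		}
	}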
	I0318 14:21:13.902558 1128964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:13.915858 1128964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:13.928426 1128964 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:13.928526 1128964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:13.951761 1128964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:13.964785 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:14.144432 1128964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:14.311928 1128964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:14.312078 1128964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:14.319279 1128964 start.go:562] Will wait 60s for crictl version
	I0318 14:21:14.319347 1128964 ssh_runner.go:195] Run: which crictl
	I0318 14:21:14.325325 1128964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:14.385244 1128964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:14.385344 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.426242 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.460725 1128964 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:21:10.753176 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:12.756558 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:13.760252 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:13.760295 1128788 pod_ready.go:81] duration metric: took 5.014671723s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:13.760315 1128788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:12.780014 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .Start
	I0318 14:21:12.780429 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring networks are active...
	I0318 14:21:12.781303 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network default is active
	I0318 14:21:12.781644 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network mk-old-k8s-version-782728 is active
	I0318 14:21:12.782077 1129259 main.go:141] libmachine: (old-k8s-version-782728) Getting domain xml...
	I0318 14:21:12.782826 1129259 main.go:141] libmachine: (old-k8s-version-782728) Creating domain...
	I0318 14:21:14.142992 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting to get IP...
	I0318 14:21:14.144199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.144824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.144851 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.144681 1130456 retry.go:31] will retry after 192.354686ms: waiting for machine to come up
	I0318 14:21:14.339303 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.339861 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.339886 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.339806 1130456 retry.go:31] will retry after 389.480557ms: waiting for machine to come up
	I0318 14:21:14.731567 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.732127 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.732163 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.732075 1130456 retry.go:31] will retry after 435.139168ms: waiting for machine to come up
	I0318 14:21:15.168657 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.169170 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.169209 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.169147 1130456 retry.go:31] will retry after 398.075576ms: waiting for machine to come up
	I0318 14:21:15.569132 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.569651 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.569699 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.569627 1130456 retry.go:31] will retry after 716.720722ms: waiting for machine to come up
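
The lines above show libmachine repeatedly asking libvirt for the domain's DHCP lease and backing off between attempts. Below is a minimal sketch of that wait-for-IP retry pattern, assuming a hypothetical lookupIP helper; the delays and jitter are illustrative, not minikube's actual retry.go logic.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the hypervisor for the domain's
// current DHCP lease; it returns an error until an address is assigned.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a jittered, roughly doubling delay,
// mirroring the "will retry after ..." messages in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", domain)
}

func main() {
	if ip, err := waitForIP("old-k8s-version-782728", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}
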
	I0318 14:21:14.461974 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:14.465116 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465652 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:14.465696 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465903 1128964 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:14.471039 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
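
The one-liner above replaces any stale host.minikube.internal entry in /etc/hosts and appends the current one in a single pass. A small sketch of the same replace-then-append idea applied to a hosts file's contents; upsertHostsEntry is a hypothetical name, not a minikube function.

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "\t<host>" and
// appends a fresh "<ip>\t<host>" line, mirroring the shell pipeline above.
func upsertHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	current := "127.0.0.1\tlocalhost\n192.168.83.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(current, "192.168.83.1", "host.minikube.internal"))
}
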
	I0318 14:21:14.486098 1128964 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:14.486307 1128964 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:21:14.486379 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:14.526373 1128964 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:21:14.526476 1128964 ssh_runner.go:195] Run: which lz4
	I0318 14:21:14.531145 1128964 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:21:14.536370 1128964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:14.536412 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
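
Above, a stat over SSH decides whether the ~458 MB preload tarball has to be copied into the guest before extraction. A hedged sketch of that check-then-copy flow; runRemote and copyToRemote are stand-in stubs for minikube's ssh_runner, not its real API.

package main

import (
	"errors"
	"fmt"
)

// runRemote and copyToRemote stand in for minikube's ssh_runner; here they
// are stubs so the control flow can be shown without a real VM.
func runRemote(cmd string) error {
	return errors.New("stat: cannot statx '/preloaded.tar.lz4': No such file or directory")
}

func copyToRemote(localPath, remotePath string) error {
	fmt.Printf("scp %s --> %s\n", localPath, remotePath)
	return nil
}

// ensurePreload copies the cached preload tarball only when the guest
// does not already have it, matching the stat/scp sequence in the log.
func ensurePreload(local, remote string) error {
	if err := runRemote(fmt.Sprintf("stat -c \"%%s %%y\" %s", remote)); err == nil {
		return nil // already present, nothing to do
	}
	return copyToRemote(local, remote)
}

func main() {
	_ = ensurePreload(
		"/home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4",
		"/preloaded.tar.lz4",
	)
}
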
	I0318 14:21:15.769517 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:17.772721 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:18.769552 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:18.769590 1128788 pod_ready.go:81] duration metric: took 5.009265127s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:18.769610 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:16.287569 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:16.288171 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:16.288208 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:16.288111 1130456 retry.go:31] will retry after 837.119291ms: waiting for machine to come up
	I0318 14:21:17.127197 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.127610 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.127641 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.127572 1130456 retry.go:31] will retry after 786.468871ms: waiting for machine to come up
	I0318 14:21:17.916280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.916885 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.916920 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.916827 1130456 retry.go:31] will retry after 1.219601482s: waiting for machine to come up
	I0318 14:21:19.137624 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:19.138092 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:19.138124 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:19.138038 1130456 retry.go:31] will retry after 1.236592895s: waiting for machine to come up
	I0318 14:21:20.376069 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:20.376549 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:20.376574 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:20.376518 1130456 retry.go:31] will retry after 2.101851485s: waiting for machine to come up
	I0318 14:21:16.505094 1128964 crio.go:444] duration metric: took 1.973996063s to copy over tarball
	I0318 14:21:16.505250 1128964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:19.251009 1128964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.745717226s)
	I0318 14:21:19.251045 1128964 crio.go:451] duration metric: took 2.745895394s to extract the tarball
	I0318 14:21:19.251053 1128964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:21:19.308392 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:19.363143 1128964 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:21:19.363172 1128964 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:21:19.363181 1128964 kubeadm.go:928] updating node { 192.168.83.39 8444 v1.28.4 crio true true} ...
	I0318 14:21:19.363313 1128964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-075922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:21:19.363415 1128964 ssh_runner.go:195] Run: crio config
	I0318 14:21:19.415995 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:19.416028 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:19.416048 1128964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:19.416085 1128964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-075922 NodeName:default-k8s-diff-port-075922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:21:19.416297 1128964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-075922"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:19.416379 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:21:19.427340 1128964 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:19.427420 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:19.438470 1128964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0318 14:21:19.459945 1128964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:19.479728 1128964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0318 14:21:19.500079 1128964 ssh_runner.go:195] Run: grep 192.168.83.39	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:19.504746 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:19.519931 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:19.654822 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:19.675414 1128964 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922 for IP: 192.168.83.39
	I0318 14:21:19.675443 1128964 certs.go:194] generating shared ca certs ...
	I0318 14:21:19.675462 1128964 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:19.675647 1128964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:19.675707 1128964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:19.675722 1128964 certs.go:256] generating profile certs ...
	I0318 14:21:19.675861 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/client.key
	I0318 14:21:19.683399 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key.675162fd
	I0318 14:21:19.683522 1128964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key
	I0318 14:21:19.683667 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:19.683715 1128964 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:19.683730 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:19.683782 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:19.683870 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:19.683897 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:19.683940 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:19.684679 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:19.743065 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:19.787963 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:19.833491 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:19.865359 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 14:21:19.903294 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:19.932298 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:19.961860 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 14:21:19.992150 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:20.020750 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:20.047780 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:20.074566 1128964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:20.094524 1128964 ssh_runner.go:195] Run: openssl version
	I0318 14:21:20.101181 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:20.118970 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124628 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124707 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.133462 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:20.150447 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:20.165864 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173488 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173627 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.183147 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:20.200417 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:20.213973 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219407 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219488 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.226491 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
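
The hashing steps above compute each certificate's OpenSSL subject hash and symlink /etc/ssl/certs/<hash>.0 to the PEM so the system trust store can find it by hash. A sketch of deriving that hash with openssl via os/exec; subjectHash is a hypothetical helper name.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash runs `openssl x509 -hash -noout -in <pem>` and returns the
// short subject hash OpenSSL uses for /etc/ssl/certs/<hash>.0 links.
func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pems := []string{
		"/usr/share/ca-certificates/1075208.pem",
		"/usr/share/ca-certificates/10752082.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	}
	for _, p := range pems {
		h, err := subjectHash(p)
		if err != nil {
			fmt.Println("skip", p, err)
			continue
		}
		// The log then links the cert under /etc/ssl/certs/<hash>.0, which is
		// how OpenSSL's CA lookup finds a certificate by subject hash.
		fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", p, h)
	}
}
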
	I0318 14:21:20.240299 1128964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:20.245960 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:20.253073 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:20.260144 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:20.267546 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:20.274740 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:20.282502 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
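
Each openssl x509 -checkend 86400 call above exits non-zero when the certificate would expire within the next 24 hours, which is what triggers regeneration. A minimal local sketch of the same check over the cert paths from the log; certExpiresSoon is an illustrative name and also fires if the file is unreadable.

package main

import (
	"fmt"
	"os/exec"
)

// certExpiresSoon runs `openssl x509 -checkend 86400`, which succeeds only
// if the certificate stays valid for at least the next 24 hours.
func certExpiresSoon(path string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	return cmd.Run() != nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		if certExpiresSoon(c) {
			fmt.Println("needs regeneration:", c)
		}
	}
}
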
	I0318 14:21:20.289722 1128964 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:20.289817 1128964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:20.289877 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.338941 1128964 cri.go:89] found id: ""
	I0318 14:21:20.339036 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:20.350677 1128964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:20.350706 1128964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:20.350718 1128964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:20.350775 1128964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:20.362216 1128964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:20.363622 1128964 kubeconfig.go:125] found "default-k8s-diff-port-075922" server: "https://192.168.83.39:8444"
	I0318 14:21:20.366606 1128964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:20.379417 1128964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.39
	I0318 14:21:20.379460 1128964 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:20.379481 1128964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:20.379556 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.423139 1128964 cri.go:89] found id: ""
	I0318 14:21:20.423224 1128964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:20.444111 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:20.456698 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:20.456725 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:20.456787 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:21:20.467432 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:20.467502 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:20.478894 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:21:20.490123 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:20.490216 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:20.501744 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.514020 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:20.514084 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.526805 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:21:20.538374 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:20.538452 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:20.550880 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:20.562302 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:20.687288 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
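
Rather than a full kubeadm init, the restart replays individual kubeadm phases (certs and kubeconfig here, then kubelet-start, control-plane and etcd further down) against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of driving that phase sequence; runRemote is a stub standing in for command execution on the guest.

package main

import "fmt"

// runRemote stands in for executing a command on the guest over SSH.
func runRemote(cmd string) error {
	fmt.Println("run:", cmd)
	return nil
}

func main() {
	const kubeadmEnv = `sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase `
	// The same phase order that appears in the log.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		if err := runRemote(kubeadmEnv + p + " --config /var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Println("phase failed:", p, err)
			return
		}
	}
}
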
	I0318 14:21:21.085960 1128788 pod_ready.go:102] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:21.781260 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.781287 1128788 pod_ready.go:81] duration metric: took 3.011668835s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.781297 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789501 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.789537 1128788 pod_ready.go:81] duration metric: took 8.231402ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789552 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797445 1128788 pod_ready.go:92] pod "kube-proxy-mtx9w" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.797483 1128788 pod_ready.go:81] duration metric: took 7.921289ms for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797496 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804084 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.804120 1128788 pod_ready.go:81] duration metric: took 6.613559ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804132 1128788 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:23.812751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
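
The pod_ready.go lines poll each pod until its Ready condition turns True, logging "Ready":"False" while they wait. A hedged client-go sketch of that wait; waitForPodReady and the 2s poll interval are illustrative, not minikube's own helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the pod every 2s until it is Ready or the timeout elapses.
func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q not Ready within %v", name, timeout)
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPodReady(cs, "kube-system", "etcd-embed-certs-767719", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
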
	I0318 14:21:22.480055 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:22.480767 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:22.480805 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:22.480700 1130456 retry.go:31] will retry after 2.377253243s: waiting for machine to come up
	I0318 14:21:24.861000 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:24.861459 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:24.861513 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:24.861440 1130456 retry.go:31] will retry after 2.768860765s: waiting for machine to come up
	I0318 14:21:21.432193 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.821781 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.899411 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.984494 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:21.984624 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.484985 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.985119 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:23.009700 1128964 api_server.go:72] duration metric: took 1.025195346s to wait for apiserver process to appear ...
	I0318 14:21:23.009739 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:23.009764 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:23.010328 1128964 api_server.go:269] stopped: https://192.168.83.39:8444/healthz: Get "https://192.168.83.39:8444/healthz": dial tcp 192.168.83.39:8444: connect: connection refused
	I0318 14:21:23.510606 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.307173 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.307217 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.307238 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.345507 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.345551 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.510350 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.515684 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:26.515721 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.010509 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.015492 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:27.015526 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.510772 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.520209 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:21:27.527945 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:27.527978 1128964 api_server.go:131] duration metric: took 4.518232257s to wait for apiserver health ...
	I0318 14:21:27.527988 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:27.527994 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:27.529779 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
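
The /healthz probes above tolerate a refused connection first, then 403 (anonymous user before RBAC bootstrap), then 500 (poststarthooks still failing), and only stop at 200. A minimal sketch of that tolerant polling loop; skipping TLS verification here mirrors probing the apiserver's self-signed bootstrap cert and is for illustration only.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint, treating refused
// connections, 403 and 500 responses as "not ready yet", until 200 or timeout.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert during bootstrap;
			// a real client would pin the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
			fmt.Println("healthz returned", code, "- retrying")
		} else {
			fmt.Println("healthz unreachable:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.83.39:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
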
	I0318 14:21:26.313296 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:28.811916 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:27.633200 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:27.633774 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:27.633824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:27.633712 1130456 retry.go:31] will retry after 2.743873993s: waiting for machine to come up
	I0318 14:21:30.380835 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:30.381280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:30.381314 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:30.381213 1130456 retry.go:31] will retry after 4.377164627s: waiting for machine to come up
	I0318 14:21:27.531259 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:27.573198 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:21:27.619813 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:27.629766 1128964 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:27.629805 1128964 system_pods.go:61] "coredns-5dd5756b68-dsrcd" [86ac331d-2539-4fbb-8cf8-56f58afa6f99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:27.629815 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [0de3bd3b-6ee2-46e2-83f7-7c637115879f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:27.629821 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [e1e689c8-642c-428e-bddf-43c2c1524563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:27.629832 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [1a200d0f-53e6-4e44-a8b0-28b9d21f763e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:27.629837 1128964 system_pods.go:61] "kube-proxy-wbnvd" [6bf13050-a150-4133-93e2-71ddcad443ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:27.629842 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [87bc17b3-75c6-4d6b-9b8f-29823398100a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:27.629847 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-4vrvb" [d12dc531-720c-4a7a-93af-69b9005666fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:27.629852 1128964 system_pods.go:61] "storage-provisioner" [856896cd-daec-4873-8f9c-c7cadeb3c16e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:27.629857 1128964 system_pods.go:74] duration metric: took 10.000416ms to wait for pod list to return data ...
	I0318 14:21:27.629866 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:27.634112 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:27.634147 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:27.634159 1128964 node_conditions.go:105] duration metric: took 4.287491ms to run NodePressure ...
	I0318 14:21:27.634190 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:27.976277 1128964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980894 1128964 kubeadm.go:733] kubelet initialised
	I0318 14:21:27.980920 1128964 kubeadm.go:734] duration metric: took 4.609836ms waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980932 1128964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:27.986151 1128964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:29.993963 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:31.313401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:33.811753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.760820 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Found IP for machine: 192.168.50.229
	I0318 14:21:34.761353 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has current primary IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761362 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserving static IP address...
	I0318 14:21:34.761782 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.761820 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserved static IP address: 192.168.50.229
	I0318 14:21:34.761845 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | skip adding static IP to network mk-old-k8s-version-782728 - found existing host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"}
	I0318 14:21:34.761864 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Getting to WaitForSSH function...
	I0318 14:21:34.761881 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting for SSH to be available...
	I0318 14:21:34.764073 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764333 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.764360 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764532 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH client type: external
	I0318 14:21:34.764572 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa (-rw-------)
	I0318 14:21:34.764613 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:34.764631 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | About to run SSH command:
	I0318 14:21:34.764647 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | exit 0
	I0318 14:21:34.896449 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | SSH cmd err, output: <nil>: 
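
libmachine waits for SSH by shelling out to the system ssh client with non-interactive options and running exit 0 until it succeeds. A hedged sketch of the same probe via os/exec; the option list is trimmed from the log and the retry cadence is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `ssh ... exit 0` succeeds against the guest.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + ip,
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady("192.168.50.229", key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
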
	I0318 14:21:34.896855 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetConfigRaw
	I0318 14:21:34.897582 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:34.899986 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900376 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.900416 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900800 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:21:34.901117 1129259 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:34.901147 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:34.901437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:34.904052 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904424 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.904452 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904606 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:34.904785 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.904945 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.905107 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:34.905279 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:34.905513 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:34.905531 1129259 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:35.016717 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:35.016763 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017067 1129259 buildroot.go:166] provisioning hostname "old-k8s-version-782728"
	I0318 14:21:35.017099 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017382 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.020497 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.020890 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.020924 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.021057 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.021277 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021590 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.021849 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.022055 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.022070 1129259 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-782728 && echo "old-k8s-version-782728" | sudo tee /etc/hostname
	I0318 14:21:35.147357 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-782728
	
	I0318 14:21:35.147390 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.150191 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150607 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.150636 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150853 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.151114 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151347 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151546 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.151781 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.152045 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.152072 1129259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-782728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-782728/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-782728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:35.275206 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:35.275240 1129259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:35.275285 1129259 buildroot.go:174] setting up certificates
	I0318 14:21:35.275295 1129259 provision.go:84] configureAuth start
	I0318 14:21:35.275306 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.275669 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:35.278614 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279090 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.279130 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279354 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.282199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282559 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.282595 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282756 1129259 provision.go:143] copyHostCerts
	I0318 14:21:35.282849 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:35.282867 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:35.282929 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:35.283102 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:35.283114 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:35.283139 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:35.283203 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:35.283210 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:35.283227 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:35.283275 1129259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-782728 san=[127.0.0.1 192.168.50.229 localhost minikube old-k8s-version-782728]
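
provision.go above generates a server certificate for the machine, signed by the local minikube CA and carrying the SANs listed in the log (127.0.0.1, 192.168.50.229, localhost, minikube, old-k8s-version-782728). A rough Go sketch of that kind of SAN-bearing certificate generation with crypto/x509; this is not minikube's code, the file paths and validity period are placeholders, error handling is elided, and it assumes the CA key is a PKCS#1 RSA PEM:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load an existing CA cert and key (placeholder paths).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// New key pair for the server certificate.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-782728"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // 3 years, an arbitrary choice
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the provision.go log line above
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.229")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-782728"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
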
	I0318 14:21:35.515186 1129259 provision.go:177] copyRemoteCerts
	I0318 14:21:35.515266 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:35.515318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.517932 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518244 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.518297 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518441 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.518653 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.518795 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.518970 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:35.607609 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:35.636141 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 14:21:35.664489 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:35.692201 1129259 provision.go:87] duration metric: took 416.891642ms to configureAuth
	I0318 14:21:35.692259 1129259 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:35.692491 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:21:35.692585 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.695742 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696122 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.696159 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696325 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.696561 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696767 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696934 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.697111 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.697355 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.697384 1129259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:35.994320 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:35.994352 1129259 machine.go:97] duration metric: took 1.093217385s to provisionDockerMachine
	I0318 14:21:35.994367 1129259 start.go:293] postStartSetup for "old-k8s-version-782728" (driver="kvm2")
	I0318 14:21:35.994383 1129259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:35.994415 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:35.994757 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:35.994799 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.997438 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.997814 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.997850 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.998044 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.998241 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.998437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.998571 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.089357 1129259 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:36.094372 1129259 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:36.094407 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:36.094499 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:36.094617 1129259 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:36.094714 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:36.106796 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:36.135520 1129259 start.go:296] duration metric: took 141.136354ms for postStartSetup
	I0318 14:21:36.135573 1129259 fix.go:56] duration metric: took 23.385978091s for fixHost
	I0318 14:21:36.135607 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.139108 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139458 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.139491 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139689 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.139978 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140226 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140353 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.140528 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:36.140755 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:36.140771 1129259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 14:21:36.252999 1128583 start.go:364] duration metric: took 57.905644198s to acquireMachinesLock for "no-preload-188109"
	I0318 14:21:36.253054 1128583 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:36.253063 1128583 fix.go:54] fixHost starting: 
	I0318 14:21:36.253510 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:36.253545 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:36.271856 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0318 14:21:36.272254 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:36.272790 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:21:36.272822 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:36.273237 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:36.273446 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:36.273614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:21:36.275414 1128583 fix.go:112] recreateIfNeeded on no-preload-188109: state=Stopped err=<nil>
	I0318 14:21:36.275440 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	W0318 14:21:36.275623 1128583 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:36.277528 1128583 out.go:177] * Restarting existing kvm2 VM for "no-preload-188109" ...
	I0318 14:21:31.995770 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.495078 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.252848 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771696.238093940
	
	I0318 14:21:36.252877 1129259 fix.go:216] guest clock: 1710771696.238093940
	I0318 14:21:36.252884 1129259 fix.go:229] Guest: 2024-03-18 14:21:36.23809394 +0000 UTC Remote: 2024-03-18 14:21:36.13557956 +0000 UTC m=+255.035410784 (delta=102.51438ms)
	I0318 14:21:36.252906 1129259 fix.go:200] guest clock delta is within tolerance: 102.51438ms
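
The guest-clock check above runs date +%s.%N inside the VM, parses the epoch timestamp, and compares it with the host clock at the moment the command returned; here the drift is ~102ms. A small Go sketch of that comparison (the one-second tolerance is an assumption, not minikube's constant; the numbers are copied from the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses "date +%s.%N" output and returns guest-minus-local drift.
func guestClockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 { // assumes the fractional part carries the full 9 digits
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	local := time.Unix(1710771696, 135579560) // host time from the log
	delta, _ := guestClockDelta("1710771696.238093940", local)
	const tolerance = time.Second // assumed tolerance
	fmt.Printf("delta=%v within=%v\n", delta, delta > -tolerance && delta < tolerance)
}
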
	I0318 14:21:36.252911 1129259 start.go:83] releasing machines lock for "old-k8s-version-782728", held for 23.503358875s
	I0318 14:21:36.252936 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.253200 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:36.256277 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256711 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.256741 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256901 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257487 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257702 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257827 1129259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:36.257887 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.258009 1129259 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:36.258034 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.260840 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261336 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261358 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261456 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.261692 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.261789 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261818 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261892 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.261982 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.262127 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.262173 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.262300 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.262429 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.345131 1129259 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:36.371649 1129259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:36.524261 1129259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:36.533020 1129259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:36.533151 1129259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:36.551817 1129259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:36.551860 1129259 start.go:494] detecting cgroup driver to use...
	I0318 14:21:36.551933 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:36.575948 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:36.596748 1129259 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:36.596820 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:36.614156 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:36.630681 1129259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:36.753374 1129259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:36.944402 1129259 docker.go:233] disabling docker service ...
	I0318 14:21:36.944496 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:36.966727 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:36.987565 1129259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:37.121256 1129259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:37.264652 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:37.281737 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:37.306307 1129259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 14:21:37.306374 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.318728 1129259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:37.318818 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.330587 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.343063 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.356170 1129259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:37.369932 1129259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:37.380417 1129259 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:37.380487 1129259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:37.397409 1129259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:37.414745 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:37.571427 1129259 ssh_runner.go:195] Run: sudo systemctl restart crio
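
Taken together, the sed edits above (pause_image, cgroup_manager, conmon_cgroup) leave the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the following relevant settings before the restart. This is a reconstruction from the commands in the log; the section headers are assumed from cri-o's documented config layout, not captured from the VM:

[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
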
	I0318 14:21:37.747275 1129259 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:37.747357 1129259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:37.752838 1129259 start.go:562] Will wait 60s for crictl version
	I0318 14:21:37.752922 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:37.758286 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:37.799301 1129259 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:37.799400 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.838257 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.889692 1129259 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 14:21:35.812465 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:37.820263 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.313683 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.278973 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Start
	I0318 14:21:36.279160 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring networks are active...
	I0318 14:21:36.280043 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network default is active
	I0318 14:21:36.280495 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network mk-no-preload-188109 is active
	I0318 14:21:36.281014 1128583 main.go:141] libmachine: (no-preload-188109) Getting domain xml...
	I0318 14:21:36.281995 1128583 main.go:141] libmachine: (no-preload-188109) Creating domain...
	I0318 14:21:37.644409 1128583 main.go:141] libmachine: (no-preload-188109) Waiting to get IP...
	I0318 14:21:37.645406 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.645958 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.646047 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.645922 1130597 retry.go:31] will retry after 223.965782ms: waiting for machine to come up
	I0318 14:21:37.871397 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.871933 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.871971 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.871882 1130597 retry.go:31] will retry after 272.743353ms: waiting for machine to come up
	I0318 14:21:38.146680 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.147278 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.147309 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.147211 1130597 retry.go:31] will retry after 414.468616ms: waiting for machine to come up
	I0318 14:21:38.563199 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.563768 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.563794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.563718 1130597 retry.go:31] will retry after 582.588791ms: waiting for machine to come up
	I0318 14:21:39.147611 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.148410 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.148436 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.148315 1130597 retry.go:31] will retry after 686.425224ms: waiting for machine to come up
	I0318 14:21:39.836964 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.837647 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.837677 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.837593 1130597 retry.go:31] will retry after 878.564369ms: waiting for machine to come up
	I0318 14:21:40.717644 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:40.718346 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:40.718380 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:40.718276 1130597 retry.go:31] will retry after 1.183201382s: waiting for machine to come up
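
The repeated "will retry after ...: waiting for machine to come up" lines come from a retry helper that sleeps a growing, jittered interval between IP-address lookups. A bare-bones Go sketch of that shape (the growth factor and jitter are assumptions; this is not retry.go itself):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls fn until it succeeds or the timeout elapses, sleeping a
// jittered, growing interval between attempts.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if err := fn(); err == nil {
			return nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2 // assumed growth factor
	}
	return errors.New("machine did not come up before the deadline")
}

func main() {
	attempts := 0
	_ = retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
}
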
	I0318 14:21:37.891038 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:37.894295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.894865 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:37.894896 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.895237 1129259 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:37.899967 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:37.916249 1129259 kubeadm.go:877] updating cluster {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:37.916384 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:21:37.916449 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:37.974406 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:37.974492 1129259 ssh_runner.go:195] Run: which lz4
	I0318 14:21:37.979374 1129259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:21:37.984355 1129259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:37.984400 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 14:21:39.978421 1129259 crio.go:444] duration metric: took 1.99908094s to copy over tarball
	I0318 14:21:39.978524 1129259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:36.995480 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:39.005382 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.495300 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.495345 1128964 pod_ready.go:81] duration metric: took 12.509166884s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.495358 1128964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504432 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.504467 1128964 pod_ready.go:81] duration metric: took 9.100778ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504480 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515466 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.515506 1128964 pod_ready.go:81] duration metric: took 11.017212ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515519 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525891 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.525929 1128964 pod_ready.go:81] duration metric: took 10.399892ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525943 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534161 1128964 pod_ready.go:92] pod "kube-proxy-wbnvd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.534196 1128964 pod_ready.go:81] duration metric: took 8.245545ms for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534208 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
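
The pod_ready.go lines above poll each control-plane pod until its Ready condition turns True or the 4-minute deadline passes. A hedged client-go sketch of that kind of check; this is not minikube's code, and the namespace and pod name are taken from the log only for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the API server until the named pod is Ready or the timeout passes.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{}); err == nil && isPodReady(p) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "kube-proxy-wbnvd", 4*time.Minute))
}
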
	I0318 14:21:42.314504 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:44.812532 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:41.902972 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:41.903707 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:41.903736 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:41.903670 1130597 retry.go:31] will retry after 1.282612289s: waiting for machine to come up
	I0318 14:21:43.188745 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:43.189303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:43.189332 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:43.189257 1130597 retry.go:31] will retry after 1.175485401s: waiting for machine to come up
	I0318 14:21:44.366602 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:44.367162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:44.367191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:44.367121 1130597 retry.go:31] will retry after 1.700678954s: waiting for machine to come up
	I0318 14:21:43.321091 1129259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342462355s)
	I0318 14:21:43.321144 1129259 crio.go:451] duration metric: took 3.342687518s to extract the tarball
	I0318 14:21:43.321155 1129259 ssh_runner.go:146] rm: /preloaded.tar.lz4
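
The preload path shown above is: scp the versioned images tarball into the VM, extract it under /var with extended attributes preserved, then delete the tarball. A small Go sketch that shells out the same tar invocation (placeholder paths; assumes tar, lz4 and sudo are available; not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed preload tarball under destDir,
// mirroring the tar command in the log.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
}
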
	I0318 14:21:43.365776 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:43.433785 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:43.433824 1129259 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:43.433900 1129259 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.434017 1129259 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.434032 1129259 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 14:21:43.434046 1129259 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.434053 1129259 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.434305 1129259 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436059 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.436080 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.436108 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.436157 1129259 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.436171 1129259 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436220 1129259 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 14:21:43.436239 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.436852 1129259 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.592274 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.597491 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.602837 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.613030 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.613827 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.626606 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.643937 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 14:21:43.712054 1129259 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 14:21:43.712144 1129259 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.712203 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.745459 1129259 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 14:21:43.745524 1129259 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.745578 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.804000 1129259 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 14:21:43.804069 1129259 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.804132 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.818890 1129259 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 14:21:43.818946 1129259 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.818948 1129259 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 14:21:43.818984 1129259 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.818996 1129259 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 14:21:43.819000 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819013 1129259 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.819034 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819043 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819047 1129259 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 14:21:43.819079 1129259 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 14:21:43.819111 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819145 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.819113 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.819191 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.900808 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 14:21:43.900881 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 14:21:43.900956 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 14:21:43.900960 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.901030 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 14:21:43.901092 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.901124 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.979791 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 14:21:43.999132 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 14:21:44.055513 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:44.211993 1129259 cache_images.go:92] duration metric: took 778.138355ms to LoadCachedImages
	W0318 14:21:44.212165 1129259 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
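
cache_images.go above asks the runtime for each required image (sudo podman image inspect --format {{.Id}}), marks anything the runtime does not have as "needs transfer", removes the stale reference, and then tries to load the per-image cache file; here that fails because the cached kube-scheduler tarball is missing on the host. A bare-bones sketch of the existence check only, shelling out to podman the same way the log does (not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID returns the runtime's ID for an image, or "" if it is not present.
func imageID(image string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return "" // podman exits non-zero when the image does not exist
	}
	return strings.TrimSpace(string(out))
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/pause:3.2",
	} {
		if id := imageID(img); id == "" {
			fmt.Printf("%s needs transfer\n", img)
		} else {
			fmt.Printf("%s present as %s\n", img, id)
		}
	}
}
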
	I0318 14:21:44.212193 1129259 kubeadm.go:928] updating node { 192.168.50.229 8443 v1.20.0 crio true true} ...
	I0318 14:21:44.212368 1129259 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-782728 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:21:44.212495 1129259 ssh_runner.go:195] Run: crio config
	I0318 14:21:44.269727 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:21:44.269766 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:44.269785 1129259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:44.269814 1129259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-782728 NodeName:old-k8s-version-782728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 14:21:44.270015 1129259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-782728"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:44.270105 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 14:21:44.282940 1129259 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:44.283039 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:44.295320 1129259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 14:21:44.315686 1129259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:44.335233 1129259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 14:21:44.357698 1129259 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:44.362264 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:44.377101 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:44.528190 1129259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:44.549708 1129259 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728 for IP: 192.168.50.229
	I0318 14:21:44.549735 1129259 certs.go:194] generating shared ca certs ...
	I0318 14:21:44.549763 1129259 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:44.549989 1129259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:44.550058 1129259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:44.550074 1129259 certs.go:256] generating profile certs ...
	I0318 14:21:44.550213 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.key
	I0318 14:21:44.550297 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612
	I0318 14:21:44.550356 1129259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key
	I0318 14:21:44.550551 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:44.550592 1129259 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:44.550606 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:44.550645 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:44.550677 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:44.550723 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:44.550778 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:44.551493 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:44.612076 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:44.644841 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:44.677687 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:44.719459 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 14:21:44.767865 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 14:21:44.816764 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:44.860167 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:44.891216 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:44.927632 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:44.965589 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:45.002269 1129259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:45.025347 1129259 ssh_runner.go:195] Run: openssl version
	I0318 14:21:45.032361 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:45.046783 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052835 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052942 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.060025 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:45.073939 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:45.087380 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092866 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092945 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.099328 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:45.112233 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:45.126449 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132566 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132667 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.139307 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:45.153117 1129259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:45.158588 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:45.166096 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:45.173537 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:45.181337 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:45.189126 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:45.197163 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:21:45.206171 1129259 kubeadm.go:391] StartCluster: {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:45.206295 1129259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:45.206370 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.247013 1129259 cri.go:89] found id: ""
	I0318 14:21:45.247119 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:45.261917 1129259 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:45.261947 1129259 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:45.261955 1129259 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:45.262015 1129259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:45.276154 1129259 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:45.277263 1129259 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:21:45.277937 1129259 kubeconfig.go:62] /home/jenkins/minikube-integration/18427-1067917/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-782728" cluster setting kubeconfig missing "old-k8s-version-782728" context setting]
	I0318 14:21:45.278862 1129259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:45.280825 1129259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:45.295159 1129259 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.229
	I0318 14:21:45.295211 1129259 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:45.295255 1129259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:45.295321 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.343156 1129259 cri.go:89] found id: ""
	I0318 14:21:45.343242 1129259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:45.361812 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:45.376218 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:45.376250 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:45.376314 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:45.386913 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:45.387056 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:45.398244 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:45.409397 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:45.409476 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:45.421057 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.432124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:45.432193 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.443793 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:45.454348 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:45.454463 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:45.465286 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:45.477199 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:45.613588 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:41.690971 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:41.691009 1128964 pod_ready.go:81] duration metric: took 1.156786821s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:41.691020 1128964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:44.189110 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.201644 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.813954 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:48.817402 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.069196 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:46.069747 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:46.069797 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:46.069687 1130597 retry.go:31] will retry after 2.354521412s: waiting for machine to come up
	I0318 14:21:48.425714 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:48.426186 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:48.426219 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:48.426147 1130597 retry.go:31] will retry after 2.74319235s: waiting for machine to come up
	I0318 14:21:46.567767 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.838421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.993039 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:47.096766 1129259 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:47.096883 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:47.596963 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.097569 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.597879 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.097195 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.597924 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.097885 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.597926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:51.096984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.699275 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:50.699690 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.311999 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:53.811066 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.173264 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.173844 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:51.173880 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:51.173784 1130597 retry.go:31] will retry after 4.489599719s: waiting for machine to come up
	I0318 14:21:55.665080 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665639 1128583 main.go:141] libmachine: (no-preload-188109) Found IP for machine: 192.168.61.40
	I0318 14:21:55.665675 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has current primary IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665686 1128583 main.go:141] libmachine: (no-preload-188109) Reserving static IP address...
	I0318 14:21:55.666111 1128583 main.go:141] libmachine: (no-preload-188109) Reserved static IP address: 192.168.61.40
	I0318 14:21:55.666149 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.666164 1128583 main.go:141] libmachine: (no-preload-188109) Waiting for SSH to be available...
	I0318 14:21:55.666191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | skip adding static IP to network mk-no-preload-188109 - found existing host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"}
	I0318 14:21:55.666205 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Getting to WaitForSSH function...
	I0318 14:21:55.668473 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668792 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.668837 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668947 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH client type: external
	I0318 14:21:55.668989 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa (-rw-------)
	I0318 14:21:55.669020 1128583 main.go:141] libmachine: (no-preload-188109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:55.669043 1128583 main.go:141] libmachine: (no-preload-188109) DBG | About to run SSH command:
	I0318 14:21:55.669095 1128583 main.go:141] libmachine: (no-preload-188109) DBG | exit 0
	I0318 14:21:55.796228 1128583 main.go:141] libmachine: (no-preload-188109) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:55.796668 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetConfigRaw
	I0318 14:21:55.797378 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:55.800241 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.800716 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.800771 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.801150 1128583 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/config.json ...
	I0318 14:21:55.801416 1128583 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:55.801441 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:55.801690 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.804667 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.597867 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.097894 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.597872 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.096949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.597262 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.097637 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.597078 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.097246 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.597940 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:56.097312 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.700698 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.198658 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.805029 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.805269 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.806759 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.806983 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807220 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807421 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.807623 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.807952 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.807982 1128583 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:55.920939 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:55.920993 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921259 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:21:55.921292 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921510 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.924430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.924921 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.924962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.925153 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.925431 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925792 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.926029 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.926301 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.926320 1128583 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-188109 && echo "no-preload-188109" | sudo tee /etc/hostname
	I0318 14:21:56.051873 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-188109
	
	I0318 14:21:56.051915 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.055015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055387 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.055422 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055659 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.055887 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056058 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056190 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.056318 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.056508 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.056525 1128583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-188109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-188109/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-188109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:56.178366 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:56.178401 1128583 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:56.178443 1128583 buildroot.go:174] setting up certificates
	I0318 14:21:56.178454 1128583 provision.go:84] configureAuth start
	I0318 14:21:56.178465 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:56.178859 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:56.181995 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.182457 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182724 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.185337 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185623 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.185649 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185880 1128583 provision.go:143] copyHostCerts
	I0318 14:21:56.185968 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:56.185983 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:56.186073 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:56.186249 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:56.186264 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:56.186296 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:56.186392 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:56.186406 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:56.186432 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:56.186511 1128583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.no-preload-188109 san=[127.0.0.1 192.168.61.40 localhost minikube no-preload-188109]
	I0318 14:21:56.332196 1128583 provision.go:177] copyRemoteCerts
	I0318 14:21:56.332267 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:56.332295 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.335310 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335604 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.335639 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335787 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.336002 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.336170 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.336310 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.427529 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:56.459132 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:21:56.488690 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:56.516043 1128583 provision.go:87] duration metric: took 337.568576ms to configureAuth
	I0318 14:21:56.516088 1128583 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:56.516309 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:21:56.516457 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.519576 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.519998 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.520059 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.520237 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.520460 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520677 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520876 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.521065 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.521290 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.521307 1128583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:56.831034 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:56.831076 1128583 machine.go:97] duration metric: took 1.029643209s to provisionDockerMachine
	I0318 14:21:56.831092 1128583 start.go:293] postStartSetup for "no-preload-188109" (driver="kvm2")
	I0318 14:21:56.831107 1128583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:56.831126 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:56.831549 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:56.831611 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.834520 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.834962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.834992 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.835234 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.835415 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.835582 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.835743 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.927694 1128583 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:56.932973 1128583 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:56.933002 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:56.933088 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:56.933200 1128583 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:56.933345 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:56.943594 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:56.971483 1128583 start.go:296] duration metric: took 140.368525ms for postStartSetup
	I0318 14:21:56.971564 1128583 fix.go:56] duration metric: took 20.718501273s for fixHost
	I0318 14:21:56.971618 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.974721 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975185 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.975250 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975409 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.975679 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.975885 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.976049 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.976242 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.976438 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.976453 1128583 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:57.089795 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771717.066528661
	
	I0318 14:21:57.089823 1128583 fix.go:216] guest clock: 1710771717.066528661
	I0318 14:21:57.089834 1128583 fix.go:229] Guest: 2024-03-18 14:21:57.066528661 +0000 UTC Remote: 2024-03-18 14:21:56.971568576 +0000 UTC m=+361.214853207 (delta=94.960085ms)
	I0318 14:21:57.089865 1128583 fix.go:200] guest clock delta is within tolerance: 94.960085ms
	I0318 14:21:57.089873 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 20.836840869s
	I0318 14:21:57.089898 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.090297 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:57.094015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094517 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.094563 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094920 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095607 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095844 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095978 1128583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:57.096034 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.096182 1128583 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:57.096221 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.099303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099329 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099754 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099854 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099869 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.100103 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100118 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100339 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100568 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100578 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100766 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.100781 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.203060 1128583 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:57.209943 1128583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:57.368686 1128583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:57.376289 1128583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:57.376375 1128583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:57.394365 1128583 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:57.394405 1128583 start.go:494] detecting cgroup driver to use...
	I0318 14:21:57.394488 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:57.412172 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:57.428895 1128583 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:57.428988 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:57.445064 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:57.461255 1128583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:57.596381 1128583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:57.774782 1128583 docker.go:233] disabling docker service ...
	I0318 14:21:57.774890 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:57.791820 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:57.807412 1128583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:57.961890 1128583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:58.118122 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:58.133994 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:58.155336 1128583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:58.155429 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.167537 1128583 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:58.167642 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.180814 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.193997 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.206817 1128583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:58.220843 1128583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:58.232012 1128583 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:58.232073 1128583 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:58.246610 1128583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:58.260393 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:58.416723 1128583 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:58.588776 1128583 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:58.588864 1128583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:58.594689 1128583 start.go:562] Will wait 60s for crictl version
	I0318 14:21:58.594787 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:58.599287 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:58.634954 1128583 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:58.635059 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.667031 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.703316 1128583 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 14:21:55.812079 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:57.813027 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.310988 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:58.704763 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:58.708030 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708495 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:58.708527 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708738 1128583 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:58.713408 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:58.726934 1128583 kubeadm.go:877] updating cluster {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:58.727067 1128583 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:21:58.727105 1128583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:58.764875 1128583 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 14:21:58.764904 1128583 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:58.764976 1128583 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.765019 1128583 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.765091 1128583 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.765117 1128583 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.765142 1128583 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.765158 1128583 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.765125 1128583 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.765098 1128583 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766495 1128583 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766589 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.766592 1128583 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.766768 1128583 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.766924 1128583 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.766492 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.919274 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 14:21:58.934955 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.945887 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.954907 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.961334 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.976485 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.991515 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.100572 1128583 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 14:21:59.100624 1128583 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.100684 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.125681 1128583 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 14:21:59.125740 1128583 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.125799 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.138461 1128583 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 14:21:59.138521 1128583 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.138579 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149655 1128583 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 14:21:59.149697 1128583 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.149763 1128583 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149803 1128583 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.149831 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.149839 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149790 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149875 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.231815 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.231851 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 14:21:59.231959 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:21:59.232052 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.232060 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.232064 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.232148 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.317997 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 14:21:59.318029 1128583 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318083 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318116 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318158 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318213 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318240 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.318246 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 14:21:59.318252 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318281 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 14:21:59.318315 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.364549 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:56.597953 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.098324 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.598002 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.097907 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.597192 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.097990 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.597523 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.097862 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:01.097925 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.703771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.200048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:02.313802 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.812944 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:03.246360 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.928017963s)
	I0318 14:22:03.246414 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246364 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.928251379s)
	I0318 14:22:03.246429 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 14:22:03.246439 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.92820974s)
	I0318 14:22:03.246454 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246468 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246415 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.928141711s)
	I0318 14:22:03.246512 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246515 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246516 1128583 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.88192635s)
	I0318 14:22:03.246587 1128583 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 14:22:03.246641 1128583 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:03.246704 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:22:01.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.097198 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.597105 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.097996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.597914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.097805 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.597949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.097415 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.597222 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:06.096954 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.203222 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.699887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.813730 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.311491 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.317600 1128583 ssh_runner.go:235] Completed: which crictl: (3.070863461s)
	I0318 14:22:06.317700 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:06.317775 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.071235517s)
	I0318 14:22:06.317805 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 14:22:06.317837 1128583 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.317907 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.370328 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 14:22:06.370435 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.243401402s)
	I0318 14:22:08.613903 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.295918452s)
	I0318 14:22:08.613917 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 14:22:08.613941 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:08.613994 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:06.597785 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.097171 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.597738 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.097476 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.596984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.097503 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.597464 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.096998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.597822 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.097597 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.199978 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.200394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.312752 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:13.812826 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.076840 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462814214s)
	I0318 14:22:11.076881 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 14:22:11.076917 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:11.076968 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:13.332851 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.25584847s)
	I0318 14:22:13.332896 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 14:22:13.332932 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:13.333002 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:14.705785 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.372744893s)
	I0318 14:22:14.705843 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 14:22:14.705881 1128583 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:14.705945 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:15.467380 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 14:22:15.467432 1128583 cache_images.go:123] Successfully loaded all cached images
	I0318 14:22:15.467439 1128583 cache_images.go:92] duration metric: took 16.702522125s to LoadCachedImages
	I0318 14:22:15.467456 1128583 kubeadm.go:928] updating node { 192.168.61.40 8443 v1.29.0-rc.2 crio true true} ...
	I0318 14:22:15.467619 1128583 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-188109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:22:15.467790 1128583 ssh_runner.go:195] Run: crio config
	I0318 14:22:15.520678 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:15.520705 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:15.520718 1128583 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:22:15.520740 1128583 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.40 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-188109 NodeName:no-preload-188109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:22:15.520893 1128583 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.40
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-188109"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.40
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.40"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:22:15.520965 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 14:22:15.534187 1128583 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:22:15.534260 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:22:15.546509 1128583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 14:22:15.567029 1128583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 14:22:15.586866 1128583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 14:22:15.609161 1128583 ssh_runner.go:195] Run: grep 192.168.61.40	control-plane.minikube.internal$ /etc/hosts
	I0318 14:22:15.614800 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.40	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:22:15.630088 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:22:15.754729 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:22:15.774062 1128583 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109 for IP: 192.168.61.40
	I0318 14:22:15.774093 1128583 certs.go:194] generating shared ca certs ...
	I0318 14:22:15.774114 1128583 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:22:15.774374 1128583 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:22:15.774434 1128583 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:22:15.774448 1128583 certs.go:256] generating profile certs ...
	I0318 14:22:15.774537 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/client.key
	I0318 14:22:15.774607 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key.8d4024a9
	I0318 14:22:15.774652 1128583 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key
	I0318 14:22:15.774833 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:22:15.774871 1128583 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:22:15.774882 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:22:15.774926 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:22:15.774972 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:22:15.775031 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:22:15.775106 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:22:15.775902 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:22:11.597959 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.097914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.597046 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.097863 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.597617 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.097268 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.597088 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.097142 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.597902 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:16.098091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.698561 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:14.199200 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.200026 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.312392 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:18.812463 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:15.821418 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:22:15.874044 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:22:15.910814 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:22:15.965889 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 14:22:16.001003 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:22:16.030033 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:22:16.060519 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:22:16.089952 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:22:16.119397 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:22:16.150036 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:22:16.179489 1128583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:22:16.201823 1128583 ssh_runner.go:195] Run: openssl version
	I0318 14:22:16.208496 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:22:16.222723 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228161 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228239 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.234994 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:22:16.248672 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:22:16.262626 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268255 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268361 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.274868 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:22:16.287251 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:22:16.299690 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304633 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304718 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.311230 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:22:16.325483 1128583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:22:16.331012 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:22:16.338731 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:22:16.346289 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:22:16.353403 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:22:16.359967 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:22:16.367151 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:22:16.373719 1128583 kubeadm.go:391] StartCluster: {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:22:16.373823 1128583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:22:16.373921 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.417874 1128583 cri.go:89] found id: ""
	I0318 14:22:16.417957 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:22:16.431026 1128583 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:22:16.431057 1128583 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:22:16.431065 1128583 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:22:16.431125 1128583 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:22:16.445445 1128583 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:22:16.446576 1128583 kubeconfig.go:125] found "no-preload-188109" server: "https://192.168.61.40:8443"
	I0318 14:22:16.449104 1128583 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:22:16.461001 1128583 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.40
	I0318 14:22:16.461042 1128583 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:22:16.461056 1128583 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:22:16.461104 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.502356 1128583 cri.go:89] found id: ""
	I0318 14:22:16.502437 1128583 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:22:16.525636 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:22:16.538600 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:22:16.538626 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:22:16.538677 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:22:16.550720 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:22:16.550803 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:22:16.562585 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:22:16.573439 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:22:16.573502 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:22:16.585548 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.596619 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:22:16.596706 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.608458 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:22:16.619498 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:22:16.619587 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:22:16.631359 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:22:16.643420 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:16.765437 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:17.862932 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.097434993s)
	I0318 14:22:17.862980 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.097197 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.168390 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.295118 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:22:18.295225 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.795897 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.295431 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.335088 1128583 api_server.go:72] duration metric: took 1.039967082s to wait for apiserver process to appear ...
	I0318 14:22:19.335128 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:22:19.335163 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:19.335912 1128583 api_server.go:269] stopped: https://192.168.61.40:8443/healthz: Get "https://192.168.61.40:8443/healthz": dial tcp 192.168.61.40:8443: connect: connection refused
	I0318 14:22:19.836266 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:16.597253 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.097759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.597764 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.097196 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.597181 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.097798 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.598008 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.097899 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.597717 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:21.097339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.699537 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:21.199910 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:22.338349 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.338383 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.338402 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.351154 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.351190 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.835446 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.841044 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:22.841092 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.335665 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.347092 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.347126 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.835731 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.840517 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.840559 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:24.336151 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:24.340981 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:22:24.354524 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:22:24.354560 1128583 api_server.go:131] duration metric: took 5.019424083s to wait for apiserver health ...
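(Editor's note, not part of the log.) The healthz wait above progresses from connection refused, to 403 while RBAC is not yet bootstrapped, to 500 while post-start hooks are still failing, and finally to 200 "ok". A minimal sketch of such a retry loop follows; pollHealthz is an illustrative name, and TLS verification is skipped here purely for brevity, whereas a real client would trust the cluster CA.

// Minimal sketch of polling an apiserver /healthz endpoint until it is "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Connection refused: the apiserver is not listening yet.
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			return nil
		}
		// 403 or 500: up but not healthy yet, as the log above shows.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.61.40:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}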
	I0318 14:22:24.354570 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:24.354576 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:24.356602 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:22:20.818751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:23.312003 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:24.358089 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:22:24.375159 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
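(Editor's note, not part of the log.) The line above copies a 457-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist, but the file contents are not shown in the log. Purely as an illustration of what a bridge plus host-local conflist looks like, here is a sketch; every field value is an assumption, not minikube's actual file.

// Illustrative only: write a generic bridge CNI conflist to the logged path.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// 0644 so the container runtime can read it; path matches the log line above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}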
	I0318 14:22:24.426409 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:22:24.452289 1128583 system_pods.go:59] 8 kube-system pods found
	I0318 14:22:24.452326 1128583 system_pods.go:61] "coredns-76f75df574-cksb5" [9cd14e15-7b0f-4978-b667-cba1a54db074] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:22:24.452333 1128583 system_pods.go:61] "etcd-no-preload-188109" [fa7d3ae7-2ac1-4275-8739-686c2e3b7569] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:22:24.452345 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [135ee544-ca83-41ab-9cb2-070587eb3b77] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:22:24.452351 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [fd91846b-6210-4cab-ae0f-5e942b4f596e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:22:24.452361 1128583 system_pods.go:61] "kube-proxy-k5kcr" [a1649d3a-9063-49c3-a8a5-04879eee108b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:22:24.452367 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [5bbb4165-ca8f-4807-ad01-bb35c56b6aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:22:24.452375 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-6pn6n" [004af8d8-fa8c-475c-9604-ed98ccceb3a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:22:24.452390 1128583 system_pods.go:61] "storage-provisioner" [45cae6ca-e3ad-4f7e-9d10-96e091160e4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:22:24.452404 1128583 system_pods.go:74] duration metric: took 25.960889ms to wait for pod list to return data ...
	I0318 14:22:24.452417 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:22:24.456337 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:22:24.456367 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:22:24.456404 1128583 node_conditions.go:105] duration metric: took 3.980296ms to run NodePressure ...
	I0318 14:22:24.456424 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:24.738808 1128583 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743864 1128583 kubeadm.go:733] kubelet initialised
	I0318 14:22:24.743893 1128583 kubeadm.go:734] duration metric: took 5.054661ms waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743905 1128583 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:22:24.749832 1128583 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:21.597443 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.097053 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.597084 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.097025 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.597649 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.097040 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.597607 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.097886 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.597114 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:26.097643 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.700193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.198261 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:25.810553 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:27.811576 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.310813 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.757033 1128583 pod_ready.go:102] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:28.757522 1128583 pod_ready.go:92] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:28.757562 1128583 pod_ready.go:81] duration metric: took 4.007696709s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:28.757576 1128583 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:30.767877 1128583 pod_ready.go:102] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.597493 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.097772 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.597033 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.097997 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.597751 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.097139 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.596987 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.097453 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.598006 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:31.097066 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.199688 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.199994 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:32.311356 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.311807 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.265717 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:31.265745 1128583 pod_ready.go:81] duration metric: took 2.508162139s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:31.265755 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:33.273718 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:35.275477 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.597688 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.097887 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.597759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.097858 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.597065 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.097024 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.597018 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.097472 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.597226 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.097920 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.200137 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.698589 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:36.812617 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.312289 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:37.774164 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.273935 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.273990 1128583 pod_ready.go:81] duration metric: took 8.008204942s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.274005 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280284 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.280313 1128583 pod_ready.go:81] duration metric: took 6.300519ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280324 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286027 1128583 pod_ready.go:92] pod "kube-proxy-k5kcr" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.286052 1128583 pod_ready.go:81] duration metric: took 5.721757ms for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286061 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292404 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.292450 1128583 pod_ready.go:81] duration metric: took 6.381121ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292462 1128583 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
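(Editor's note, not part of the log.) The pod_ready lines above poll each system-critical pod roughly every two seconds until its Ready condition turns True, then record the elapsed time. A minimal client-go sketch of that check follows; the kubeconfig path and the pod name are taken from the log for illustration, and waitForPod is not minikube's actual helper.

// Minimal client-go sketch of waiting for a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPod(clientset *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // roughly the interval seen between checks above
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPod(clientset, "kube-system", "coredns-76f75df574-cksb5", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}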
	I0318 14:22:36.597756 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.097176 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.597091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.097280 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.597026 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.097810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.597789 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.097897 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.597313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:41.096966 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.699760 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.198691 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.199259 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.812494 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:44.312890 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.300167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:43.803022 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.597849 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.097957 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.597473 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.097624 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.597810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.098012 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.597317 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.097384 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.597816 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:46.097353 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.199771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:45.698884 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.811124 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.827580 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.300768 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.300891 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.800442 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.597824 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:47.097559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:47.097660 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:47.142970 1129259 cri.go:89] found id: ""
	I0318 14:22:47.143027 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.143040 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:47.143047 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:47.143196 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:47.183530 1129259 cri.go:89] found id: ""
	I0318 14:22:47.183564 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.183573 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:47.183578 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:47.183654 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:47.226284 1129259 cri.go:89] found id: ""
	I0318 14:22:47.226317 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.226351 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:47.226359 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:47.226433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:47.272642 1129259 cri.go:89] found id: ""
	I0318 14:22:47.272684 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.272708 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:47.272725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:47.272791 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:47.318501 1129259 cri.go:89] found id: ""
	I0318 14:22:47.318547 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.318562 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:47.318571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:47.318652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:47.357743 1129259 cri.go:89] found id: ""
	I0318 14:22:47.357786 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.357801 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:47.357810 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:47.357894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:47.398516 1129259 cri.go:89] found id: ""
	I0318 14:22:47.398550 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.398563 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:47.398571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:47.398649 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:47.443375 1129259 cri.go:89] found id: ""
	I0318 14:22:47.443413 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.443426 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:47.443439 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:47.443456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:47.512719 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:47.512773 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:47.560380 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:47.560421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:47.616159 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:47.616221 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:47.631903 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:47.631945 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:47.766159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
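(Editor's note, not part of the log.) The second cluster in this interleaved log (PID 1129259) repeatedly finds no control-plane containers via crictl and then falls back to gathering kubelet, dmesg, CRI-O, and "describe nodes" output. A minimal sketch of the container check is below; it runs crictl locally rather than over SSH, and the function names are illustrative.

// Minimal sketch, assuming crictl is installed on the machine running this.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := findContainers(name)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// Matches the log: "No container was found matching ..."
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
	}
}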
	I0318 14:22:50.267365 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:50.287102 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:50.287169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:50.326581 1129259 cri.go:89] found id: ""
	I0318 14:22:50.326618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.326630 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:50.326638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:50.326719 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:50.366526 1129259 cri.go:89] found id: ""
	I0318 14:22:50.366563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.366577 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:50.366585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:50.366656 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:50.407884 1129259 cri.go:89] found id: ""
	I0318 14:22:50.407920 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.407932 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:50.407939 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:50.408011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:50.446932 1129259 cri.go:89] found id: ""
	I0318 14:22:50.446971 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.446982 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:50.446990 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:50.447047 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:50.490489 1129259 cri.go:89] found id: ""
	I0318 14:22:50.490529 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.490542 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:50.490552 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:50.490632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:50.531796 1129259 cri.go:89] found id: ""
	I0318 14:22:50.531876 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.531896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:50.531911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:50.532000 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:50.579429 1129259 cri.go:89] found id: ""
	I0318 14:22:50.579464 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.579473 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:50.579480 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:50.579555 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:50.617981 1129259 cri.go:89] found id: ""
	I0318 14:22:50.618053 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.618070 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:50.618086 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:50.618107 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:50.690265 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:50.690316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:50.738713 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:50.738750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:50.793127 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:50.793176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:50.809608 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:50.809645 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:50.893389 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:47.699312 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.199049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:51.312163 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.812711 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:52.800573 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:54.801034 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.394103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:53.410405 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:53.410485 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:53.451524 1129259 cri.go:89] found id: ""
	I0318 14:22:53.451563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.451577 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:53.451585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:53.451650 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:53.492923 1129259 cri.go:89] found id: ""
	I0318 14:22:53.492958 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.492972 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:53.492980 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:53.493053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:53.535699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.535738 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.535751 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:53.535757 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:53.535846 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:53.575766 1129259 cri.go:89] found id: ""
	I0318 14:22:53.575807 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.575818 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:53.575843 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:53.575922 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:53.613442 1129259 cri.go:89] found id: ""
	I0318 14:22:53.613473 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.613495 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:53.613502 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:53.613567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:53.655108 1129259 cri.go:89] found id: ""
	I0318 14:22:53.655141 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.655152 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:53.655160 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:53.655233 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:53.693839 1129259 cri.go:89] found id: ""
	I0318 14:22:53.693879 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.693891 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:53.693898 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:53.693971 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:53.736699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.736729 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.736737 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:53.736747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:53.736759 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:53.790612 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:53.790670 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:53.806185 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:53.806226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:53.893535 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:53.893575 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:53.893593 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:53.966434 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:53.966482 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:52.698863 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:55.200175 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.311249 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:58.312362 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:57.300207 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.300788 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.513599 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:56.529572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:56.529652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:56.569850 1129259 cri.go:89] found id: ""
	I0318 14:22:56.569890 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.569905 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:56.569923 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:56.570001 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:56.607508 1129259 cri.go:89] found id: ""
	I0318 14:22:56.607542 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.607554 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:56.607562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:56.607625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:56.644693 1129259 cri.go:89] found id: ""
	I0318 14:22:56.644731 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.644742 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:56.644751 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:56.644825 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:56.686265 1129259 cri.go:89] found id: ""
	I0318 14:22:56.686304 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.686316 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:56.686323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:56.686377 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:56.732519 1129259 cri.go:89] found id: ""
	I0318 14:22:56.732552 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.732559 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:56.732565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:56.732639 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:56.770015 1129259 cri.go:89] found id: ""
	I0318 14:22:56.770049 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.770059 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:56.770067 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:56.770120 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:56.813964 1129259 cri.go:89] found id: ""
	I0318 14:22:56.813993 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.814004 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:56.814012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:56.814108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:56.853650 1129259 cri.go:89] found id: ""
	I0318 14:22:56.853695 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.853705 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:56.853718 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:56.853735 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:56.911922 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:56.911971 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:56.935385 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:56.935415 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:57.040668 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:57.040696 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:57.040710 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:57.123258 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:57.123314 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:59.674542 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:59.688636 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:59.688721 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:59.731479 1129259 cri.go:89] found id: ""
	I0318 14:22:59.731508 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.731517 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:59.731523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:59.731599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:59.778127 1129259 cri.go:89] found id: ""
	I0318 14:22:59.778157 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.778169 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:59.778176 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:59.778245 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:59.820812 1129259 cri.go:89] found id: ""
	I0318 14:22:59.820840 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.820850 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:59.820856 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:59.820930 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:59.866491 1129259 cri.go:89] found id: ""
	I0318 14:22:59.866526 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.866539 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:59.866548 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:59.866614 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:59.907135 1129259 cri.go:89] found id: ""
	I0318 14:22:59.907173 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.907185 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:59.907194 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:59.907266 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:59.948578 1129259 cri.go:89] found id: ""
	I0318 14:22:59.948618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.948627 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:59.948633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:59.948698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:59.986724 1129259 cri.go:89] found id: ""
	I0318 14:22:59.986749 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.986758 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:59.986765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:59.986834 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:00.031190 1129259 cri.go:89] found id: ""
	I0318 14:23:00.031223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:00.031233 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:00.031244 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:00.031260 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:00.087925 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:00.087970 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:00.104778 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:00.104810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:00.190730 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:00.190759 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:00.190775 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:00.282713 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:00.282763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:57.698375 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.706517 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:00.814865 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:03.312810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:01.800156 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.302577 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:02.834125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:02.852098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:02.852184 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:02.902683 1129259 cri.go:89] found id: ""
	I0318 14:23:02.902714 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.902726 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:02.902734 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:02.902844 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:02.963685 1129259 cri.go:89] found id: ""
	I0318 14:23:02.963718 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.963742 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:02.963750 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:02.963822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:03.021566 1129259 cri.go:89] found id: ""
	I0318 14:23:03.021600 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.021611 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:03.021618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:03.021689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:03.062577 1129259 cri.go:89] found id: ""
	I0318 14:23:03.062607 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.062616 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:03.062622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:03.062681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:03.101524 1129259 cri.go:89] found id: ""
	I0318 14:23:03.101554 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.101565 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:03.101573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:03.101645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:03.146253 1129259 cri.go:89] found id: ""
	I0318 14:23:03.146282 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.146294 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:03.146309 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:03.146380 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:03.189196 1129259 cri.go:89] found id: ""
	I0318 14:23:03.189230 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.189241 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:03.189250 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:03.189335 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:03.231627 1129259 cri.go:89] found id: ""
	I0318 14:23:03.231663 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.231676 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:03.231688 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:03.231719 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:03.248100 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:03.248144 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:03.325484 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:03.325509 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:03.325522 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:03.406877 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:03.406925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:03.457449 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:03.457487 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.011169 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:06.026962 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:06.027033 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:06.068556 1129259 cri.go:89] found id: ""
	I0318 14:23:06.068595 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.068606 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:06.068615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:06.068695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:06.110627 1129259 cri.go:89] found id: ""
	I0318 14:23:06.110667 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.110679 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:06.110687 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:06.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:02.198461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.199002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.199307 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:05.811934 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:08.312176 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:10.312721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.800938 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:09.302833 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.151933 1129259 cri.go:89] found id: ""
	I0318 14:23:06.152604 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.152620 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:06.152629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:06.152697 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:06.195300 1129259 cri.go:89] found id: ""
	I0318 14:23:06.195338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.195347 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:06.195353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:06.195417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:06.235155 1129259 cri.go:89] found id: ""
	I0318 14:23:06.235207 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.235220 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:06.235229 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:06.235289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:06.282729 1129259 cri.go:89] found id: ""
	I0318 14:23:06.282772 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.282785 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:06.282793 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:06.282869 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:06.323908 1129259 cri.go:89] found id: ""
	I0318 14:23:06.323940 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.323949 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:06.323955 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:06.324011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:06.365846 1129259 cri.go:89] found id: ""
	I0318 14:23:06.365888 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.365902 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:06.365915 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:06.365934 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:06.413646 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:06.413696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.465648 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:06.465688 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:06.480926 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:06.480958 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:06.554929 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:06.554966 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:06.554985 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.139322 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:09.155700 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:09.155768 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:09.200557 1129259 cri.go:89] found id: ""
	I0318 14:23:09.200585 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.200593 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:09.200599 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:09.200653 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:09.239535 1129259 cri.go:89] found id: ""
	I0318 14:23:09.239573 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.239596 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:09.239613 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:09.239698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:09.279206 1129259 cri.go:89] found id: ""
	I0318 14:23:09.279240 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.279249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:09.279256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:09.279313 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:09.323928 1129259 cri.go:89] found id: ""
	I0318 14:23:09.323964 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.323977 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:09.323986 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:09.324062 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:09.365760 1129259 cri.go:89] found id: ""
	I0318 14:23:09.365796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.365807 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:09.365814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:09.365887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:09.411362 1129259 cri.go:89] found id: ""
	I0318 14:23:09.411394 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.411405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:09.411415 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:09.411508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:09.452793 1129259 cri.go:89] found id: ""
	I0318 14:23:09.452822 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.452873 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:09.452880 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:09.452939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:09.494230 1129259 cri.go:89] found id: ""
	I0318 14:23:09.494259 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.494269 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:09.494279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:09.494292 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:09.546804 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:09.546848 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:09.562509 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:09.562545 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:09.637701 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:09.637723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:09.637738 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.721916 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:09.721962 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:08.699862 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.199072 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.315288 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.813053 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.800023 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.300632 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.271942 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:12.288424 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:12.288503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:12.329950 1129259 cri.go:89] found id: ""
	I0318 14:23:12.329990 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.330004 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:12.330012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:12.330083 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:12.368748 1129259 cri.go:89] found id: ""
	I0318 14:23:12.368798 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.368812 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:12.368821 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:12.368894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:12.408280 1129259 cri.go:89] found id: ""
	I0318 14:23:12.408313 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.408323 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:12.408329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:12.408385 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:12.449537 1129259 cri.go:89] found id: ""
	I0318 14:23:12.449583 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.449593 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:12.449605 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:12.449661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:12.488394 1129259 cri.go:89] found id: ""
	I0318 14:23:12.488427 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.488441 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:12.488449 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:12.488528 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:12.527613 1129259 cri.go:89] found id: ""
	I0318 14:23:12.527649 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.527658 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:12.527664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:12.527716 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:12.568953 1129259 cri.go:89] found id: ""
	I0318 14:23:12.568983 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.568991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:12.568997 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:12.569051 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:12.609622 1129259 cri.go:89] found id: ""
	I0318 14:23:12.609661 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.609672 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:12.609683 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:12.609696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:12.663119 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:12.663176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:12.679466 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:12.679508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:12.763085 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:12.763110 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:12.763125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:12.848677 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:12.848721 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.393108 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:15.406670 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:15.406821 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:15.445518 1129259 cri.go:89] found id: ""
	I0318 14:23:15.445556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.445567 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:15.445574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:15.445632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:15.488009 1129259 cri.go:89] found id: ""
	I0318 14:23:15.488040 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.488052 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:15.488089 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:15.488160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:15.526067 1129259 cri.go:89] found id: ""
	I0318 14:23:15.526099 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.526108 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:15.526115 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:15.526185 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:15.567573 1129259 cri.go:89] found id: ""
	I0318 14:23:15.567608 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.567622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:15.567630 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:15.567701 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:15.606585 1129259 cri.go:89] found id: ""
	I0318 14:23:15.606615 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.606626 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:15.606642 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:15.606700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:15.645265 1129259 cri.go:89] found id: ""
	I0318 14:23:15.645296 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.645305 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:15.645312 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:15.645368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:15.685299 1129259 cri.go:89] found id: ""
	I0318 14:23:15.685332 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.685342 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:15.685348 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:15.685421 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:15.725781 1129259 cri.go:89] found id: ""
	I0318 14:23:15.725818 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.725832 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:15.725848 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:15.725867 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.769528 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:15.769568 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:15.825418 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:15.825461 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:15.842139 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:15.842173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:15.922354 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:15.922419 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:15.922438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:13.199539 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:15.700968 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:17.311266 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:19.311540 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:16.800323 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.801497 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.503475 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:18.518462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:18.518561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:18.559354 1129259 cri.go:89] found id: ""
	I0318 14:23:18.559392 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.559404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:18.559412 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:18.559484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:18.604455 1129259 cri.go:89] found id: ""
	I0318 14:23:18.604488 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.604500 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:18.604507 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:18.604592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:18.646032 1129259 cri.go:89] found id: ""
	I0318 14:23:18.646098 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.646110 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:18.646119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:18.646188 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:18.684752 1129259 cri.go:89] found id: ""
	I0318 14:23:18.684791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.684802 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:18.684808 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:18.684863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:18.728256 1129259 cri.go:89] found id: ""
	I0318 14:23:18.728299 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.728321 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:18.728330 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:18.728409 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:18.771335 1129259 cri.go:89] found id: ""
	I0318 14:23:18.771382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.771392 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:18.771398 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:18.771467 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:18.812273 1129259 cri.go:89] found id: ""
	I0318 14:23:18.812305 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.812318 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:18.812331 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:18.812399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:18.854901 1129259 cri.go:89] found id: ""
	I0318 14:23:18.854942 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.854957 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:18.854971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:18.854990 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:18.939982 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:18.940031 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:18.985433 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:18.985465 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:19.041353 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:19.041405 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:19.057764 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:19.057810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:19.131974 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:18.198887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:20.698596 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.312215 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.810513 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.299039 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.300143 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.798699 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.632395 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:21.646344 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:21.646434 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:21.687475 1129259 cri.go:89] found id: ""
	I0318 14:23:21.687526 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.687542 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:21.687553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:21.687636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:21.728684 1129259 cri.go:89] found id: ""
	I0318 14:23:21.728722 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.728734 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:21.728742 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:21.728816 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:21.772395 1129259 cri.go:89] found id: ""
	I0318 14:23:21.772436 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.772449 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:21.772457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:21.772529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:21.812758 1129259 cri.go:89] found id: ""
	I0318 14:23:21.812793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.812804 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:21.812813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:21.812878 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:21.854334 1129259 cri.go:89] found id: ""
	I0318 14:23:21.854376 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.854387 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:21.854395 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:21.854468 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:21.894237 1129259 cri.go:89] found id: ""
	I0318 14:23:21.894270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.894278 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:21.894285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:21.894339 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:21.931671 1129259 cri.go:89] found id: ""
	I0318 14:23:21.931709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.931720 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:21.931729 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:21.931795 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:21.971060 1129259 cri.go:89] found id: ""
	I0318 14:23:21.971091 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.971100 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:21.971111 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:21.971125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:22.055070 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:22.055126 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.101854 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:22.101888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:22.157502 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:22.157550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:22.175612 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:22.175648 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:22.261607 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:24.761996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:24.777475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:24.777545 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:24.818385 1129259 cri.go:89] found id: ""
	I0318 14:23:24.818421 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.818434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:24.818447 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:24.818508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:24.856232 1129259 cri.go:89] found id: ""
	I0318 14:23:24.856270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.856282 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:24.856291 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:24.856360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:24.891887 1129259 cri.go:89] found id: ""
	I0318 14:23:24.891924 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.891936 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:24.891945 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:24.892020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:24.937555 1129259 cri.go:89] found id: ""
	I0318 14:23:24.937594 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.937605 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:24.937614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:24.937689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:24.978561 1129259 cri.go:89] found id: ""
	I0318 14:23:24.978598 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.978609 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:24.978620 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:24.978692 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:25.026398 1129259 cri.go:89] found id: ""
	I0318 14:23:25.026453 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.026462 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:25.026475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:25.026529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:25.063346 1129259 cri.go:89] found id: ""
	I0318 14:23:25.063382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.063394 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:25.063403 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:25.063482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:25.106097 1129259 cri.go:89] found id: ""
	I0318 14:23:25.106135 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.106147 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:25.106160 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:25.106177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:25.162362 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:25.162412 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:25.179898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:25.179943 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:25.281856 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:25.281896 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:25.281914 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:25.371561 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:25.371605 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.699705 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.200662 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.811810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.813013 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.311457 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.800554 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.304272 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.915774 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:27.931725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:27.931806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:27.971259 1129259 cri.go:89] found id: ""
	I0318 14:23:27.971297 1129259 logs.go:276] 0 containers: []
	W0318 14:23:27.971322 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:27.971340 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:27.971411 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:28.012704 1129259 cri.go:89] found id: ""
	I0318 14:23:28.012735 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.012747 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:28.012755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:28.012829 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:28.051639 1129259 cri.go:89] found id: ""
	I0318 14:23:28.051669 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.051680 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:28.051686 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:28.051753 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:28.091344 1129259 cri.go:89] found id: ""
	I0318 14:23:28.091377 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.091386 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:28.091392 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:28.091445 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:28.131190 1129259 cri.go:89] found id: ""
	I0318 14:23:28.131224 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.131237 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:28.131246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:28.131324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:28.171717 1129259 cri.go:89] found id: ""
	I0318 14:23:28.171756 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.171769 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:28.171777 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:28.171863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:28.207812 1129259 cri.go:89] found id: ""
	I0318 14:23:28.207862 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.207874 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:28.207886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:28.207942 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:28.252721 1129259 cri.go:89] found id: ""
	I0318 14:23:28.252766 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.252779 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:28.252796 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:28.252812 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:28.311227 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:28.311278 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:28.328390 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:28.328422 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:28.413973 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:28.414005 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:28.414026 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:28.504716 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:28.504764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.049944 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:31.065402 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:31.065490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:31.110647 1129259 cri.go:89] found id: ""
	I0318 14:23:31.110675 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.110683 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:31.110690 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:31.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:27.700002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.200376 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.311860 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.313084 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.802042 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:35.299530 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:31.154046 1129259 cri.go:89] found id: ""
	I0318 14:23:31.154075 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.154084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:31.154091 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:31.154162 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:31.191863 1129259 cri.go:89] found id: ""
	I0318 14:23:31.191894 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.191904 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:31.191911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:31.191979 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:31.234961 1129259 cri.go:89] found id: ""
	I0318 14:23:31.234993 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.235003 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:31.235011 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:31.235082 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:31.290365 1129259 cri.go:89] found id: ""
	I0318 14:23:31.290402 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.290414 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:31.290421 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:31.290516 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:31.331162 1129259 cri.go:89] found id: ""
	I0318 14:23:31.331198 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.331211 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:31.331219 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:31.331283 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:31.370382 1129259 cri.go:89] found id: ""
	I0318 14:23:31.370424 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.370436 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:31.370448 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:31.370520 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:31.409913 1129259 cri.go:89] found id: ""
	I0318 14:23:31.409948 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.409959 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:31.409971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:31.409987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:31.493416 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:31.493456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.546275 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:31.546309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:31.598580 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:31.598639 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:31.615741 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:31.615778 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:31.694159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.194339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:34.209763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:34.209849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:34.248405 1129259 cri.go:89] found id: ""
	I0318 14:23:34.248442 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.248456 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:34.248464 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:34.248538 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:34.290217 1129259 cri.go:89] found id: ""
	I0318 14:23:34.290249 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.290263 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:34.290270 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:34.290338 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:34.337403 1129259 cri.go:89] found id: ""
	I0318 14:23:34.337441 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.337452 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:34.337460 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:34.337533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:34.380042 1129259 cri.go:89] found id: ""
	I0318 14:23:34.380082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.380096 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:34.380105 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:34.380181 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:34.417834 1129259 cri.go:89] found id: ""
	I0318 14:23:34.417866 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.417879 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:34.417888 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:34.417960 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:34.456496 1129259 cri.go:89] found id: ""
	I0318 14:23:34.456538 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.456549 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:34.456559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:34.456629 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:34.497772 1129259 cri.go:89] found id: ""
	I0318 14:23:34.497809 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.497822 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:34.497831 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:34.497887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:34.544757 1129259 cri.go:89] found id: ""
	I0318 14:23:34.544811 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.544825 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:34.544840 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:34.544859 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:34.602192 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:34.602237 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:34.619476 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:34.619515 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:34.695721 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.695761 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:34.695781 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:34.773045 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:34.773090 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:32.212811 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.700061 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:36.811811 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.312768 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.300434 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.300586 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.320468 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:37.335756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:37.335847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:37.379742 1129259 cri.go:89] found id: ""
	I0318 14:23:37.379791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.379804 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:37.379812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:37.379898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:37.421225 1129259 cri.go:89] found id: ""
	I0318 14:23:37.421261 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.421276 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:37.421284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:37.421353 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:37.463393 1129259 cri.go:89] found id: ""
	I0318 14:23:37.463426 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.463435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:37.463441 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:37.463503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:37.505835 1129259 cri.go:89] found id: ""
	I0318 14:23:37.505871 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.505879 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:37.505885 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:37.505951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:37.545983 1129259 cri.go:89] found id: ""
	I0318 14:23:37.546016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.546029 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:37.546037 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:37.546110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:37.585433 1129259 cri.go:89] found id: ""
	I0318 14:23:37.585466 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.585477 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:37.585486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:37.585561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:37.622978 1129259 cri.go:89] found id: ""
	I0318 14:23:37.623016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.623027 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:37.623034 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:37.623110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:37.675689 1129259 cri.go:89] found id: ""
	I0318 14:23:37.675721 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.675732 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:37.675743 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:37.675763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:37.785788 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.785820 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:37.785839 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:37.870218 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:37.870261 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:37.918199 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:37.918236 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:37.975082 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:37.975135 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:40.491216 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:40.507123 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:40.507189 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:40.548763 1129259 cri.go:89] found id: ""
	I0318 14:23:40.548796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.548806 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:40.548812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:40.548865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:40.589821 1129259 cri.go:89] found id: ""
	I0318 14:23:40.589859 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.589872 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:40.589879 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:40.589961 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:40.629571 1129259 cri.go:89] found id: ""
	I0318 14:23:40.629603 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.629615 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:40.629622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:40.629698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:40.668648 1129259 cri.go:89] found id: ""
	I0318 14:23:40.668682 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.668692 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:40.668719 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:40.668789 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:40.712948 1129259 cri.go:89] found id: ""
	I0318 14:23:40.713005 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.713018 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:40.713027 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:40.713103 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:40.763269 1129259 cri.go:89] found id: ""
	I0318 14:23:40.763298 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.763307 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:40.763313 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:40.763366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:40.809737 1129259 cri.go:89] found id: ""
	I0318 14:23:40.809776 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.809789 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:40.809798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:40.809873 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:40.849882 1129259 cri.go:89] found id: ""
	I0318 14:23:40.849921 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.849931 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:40.849941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:40.849961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:40.931042 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:40.931084 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:40.973246 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:40.973280 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:41.028835 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:41.028880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:41.044250 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:41.044293 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:41.116937 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.199672 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.698826 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.810759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.812721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.800736 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.617773 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:43.635147 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:43.635216 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:43.683392 1129259 cri.go:89] found id: ""
	I0318 14:23:43.683430 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.683446 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:43.683455 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:43.683521 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:43.729761 1129259 cri.go:89] found id: ""
	I0318 14:23:43.729801 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.729813 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:43.729820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:43.729888 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:43.790694 1129259 cri.go:89] found id: ""
	I0318 14:23:43.790728 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.790741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:43.790748 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:43.790819 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:43.838506 1129259 cri.go:89] found id: ""
	I0318 14:23:43.838537 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.838548 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:43.838557 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:43.838625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:43.879695 1129259 cri.go:89] found id: ""
	I0318 14:23:43.879725 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.879735 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:43.879743 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:43.879806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:43.919206 1129259 cri.go:89] found id: ""
	I0318 14:23:43.919238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.919250 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:43.919258 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:43.919333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:43.966266 1129259 cri.go:89] found id: ""
	I0318 14:23:43.966308 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.966321 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:43.966329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:43.966399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:44.006272 1129259 cri.go:89] found id: ""
	I0318 14:23:44.006310 1129259 logs.go:276] 0 containers: []
	W0318 14:23:44.006324 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:44.006339 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:44.006358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:44.063345 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:44.063395 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:44.079323 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:44.079365 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:44.158132 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:44.158157 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:44.158177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:44.244657 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:44.244707 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:41.707557 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.199509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.311703 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.811077 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.301804 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.800280 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.801802 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.791776 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:46.807457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:46.807547 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:46.849964 1129259 cri.go:89] found id: ""
	I0318 14:23:46.850003 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.850017 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:46.850025 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:46.850084 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:46.893174 1129259 cri.go:89] found id: ""
	I0318 14:23:46.893214 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.893227 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:46.893235 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:46.893314 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:46.933932 1129259 cri.go:89] found id: ""
	I0318 14:23:46.933969 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.933981 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:46.933998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:46.934075 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:46.973034 1129259 cri.go:89] found id: ""
	I0318 14:23:46.973073 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.973085 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:46.973093 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:46.973165 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:47.013465 1129259 cri.go:89] found id: ""
	I0318 14:23:47.013502 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.013515 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:47.013523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:47.013595 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:47.050526 1129259 cri.go:89] found id: ""
	I0318 14:23:47.050556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.050569 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:47.050583 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:47.050651 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:47.090395 1129259 cri.go:89] found id: ""
	I0318 14:23:47.090435 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.090448 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:47.090456 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:47.090533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:47.132761 1129259 cri.go:89] found id: ""
	I0318 14:23:47.132790 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.132799 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:47.132809 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:47.132822 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:47.179035 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:47.179073 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:47.231641 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:47.231687 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:47.248134 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:47.248171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:47.330265 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:47.330294 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:47.330311 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:49.912288 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:49.927753 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:49.927842 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:49.968306 1129259 cri.go:89] found id: ""
	I0318 14:23:49.968338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:49.968348 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:49.968354 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:49.968424 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:50.009781 1129259 cri.go:89] found id: ""
	I0318 14:23:50.009813 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.009821 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:50.009828 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:50.009892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:50.049203 1129259 cri.go:89] found id: ""
	I0318 14:23:50.049238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.049249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:50.049257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:50.049323 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:50.089679 1129259 cri.go:89] found id: ""
	I0318 14:23:50.089709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.089719 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:50.089725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:50.089790 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:50.132352 1129259 cri.go:89] found id: ""
	I0318 14:23:50.132384 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.132395 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:50.132404 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:50.132474 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:50.169043 1129259 cri.go:89] found id: ""
	I0318 14:23:50.169076 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.169089 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:50.169098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:50.169166 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:50.207753 1129259 cri.go:89] found id: ""
	I0318 14:23:50.207793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.207805 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:50.207813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:50.207898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:50.247048 1129259 cri.go:89] found id: ""
	I0318 14:23:50.247082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.247093 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:50.247103 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:50.247114 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:50.299768 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:50.299816 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:50.317627 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:50.317674 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:50.393122 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:50.393152 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:50.393170 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:50.480828 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:50.480880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:46.698786 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:49.198083 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:51.198509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.812029 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.311681 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.300917 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.301653 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.030467 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.044538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:53.044615 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:53.082312 1129259 cri.go:89] found id: ""
	I0318 14:23:53.082351 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.082361 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:53.082370 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:53.082431 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:53.127597 1129259 cri.go:89] found id: ""
	I0318 14:23:53.127631 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.127640 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:53.127645 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:53.127708 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:53.172152 1129259 cri.go:89] found id: ""
	I0318 14:23:53.172189 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.172203 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:53.172212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:53.172295 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:53.210210 1129259 cri.go:89] found id: ""
	I0318 14:23:53.210268 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.210281 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:53.210289 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:53.210356 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:53.248963 1129259 cri.go:89] found id: ""
	I0318 14:23:53.248995 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.249004 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:53.249010 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:53.249065 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:53.287853 1129259 cri.go:89] found id: ""
	I0318 14:23:53.287886 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.287896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:53.287903 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:53.287956 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:53.326858 1129259 cri.go:89] found id: ""
	I0318 14:23:53.326895 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.326908 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:53.326917 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:53.326987 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:53.369347 1129259 cri.go:89] found id: ""
	I0318 14:23:53.369381 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.369394 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:53.369407 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:53.369424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:53.420342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:53.420387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:53.436718 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:53.436750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:53.517954 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:53.518018 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:53.518036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:53.597726 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:53.597782 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:56.144313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.699341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.699481 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.810495 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.810917 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:59.812265 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.800712 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.300089 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:56.159569 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:56.159663 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:56.198525 1129259 cri.go:89] found id: ""
	I0318 14:23:56.198563 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.198575 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:56.198584 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:56.198662 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:56.242877 1129259 cri.go:89] found id: ""
	I0318 14:23:56.242913 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.242927 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:56.242942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:56.243018 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:56.282499 1129259 cri.go:89] found id: ""
	I0318 14:23:56.282531 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.282541 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:56.282547 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:56.282618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:56.321765 1129259 cri.go:89] found id: ""
	I0318 14:23:56.321810 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.321825 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:56.321833 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:56.321904 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:56.364005 1129259 cri.go:89] found id: ""
	I0318 14:23:56.364042 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.364054 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:56.364064 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:56.364138 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:56.402312 1129259 cri.go:89] found id: ""
	I0318 14:23:56.402339 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.402350 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:56.402356 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:56.402419 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:56.445638 1129259 cri.go:89] found id: ""
	I0318 14:23:56.445674 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.445686 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:56.445694 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:56.445760 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:56.488833 1129259 cri.go:89] found id: ""
	I0318 14:23:56.488870 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.488883 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:56.488896 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:56.488915 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:56.540862 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:56.540907 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:56.557124 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:56.557171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:56.634679 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:56.634711 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:56.634727 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:56.716419 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:56.716464 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.263125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:59.277619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:59.277703 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:59.318616 1129259 cri.go:89] found id: ""
	I0318 14:23:59.318648 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.318661 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:59.318668 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:59.318740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:59.358540 1129259 cri.go:89] found id: ""
	I0318 14:23:59.358577 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.358589 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:59.358597 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:59.358670 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:59.399046 1129259 cri.go:89] found id: ""
	I0318 14:23:59.399082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.399093 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:59.399099 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:59.399169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:59.439165 1129259 cri.go:89] found id: ""
	I0318 14:23:59.439223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.439236 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:59.439245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:59.439312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:59.476719 1129259 cri.go:89] found id: ""
	I0318 14:23:59.476755 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.476767 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:59.476775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:59.476833 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:59.515847 1129259 cri.go:89] found id: ""
	I0318 14:23:59.515878 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.515888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:59.515895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:59.515966 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:59.560831 1129259 cri.go:89] found id: ""
	I0318 14:23:59.560861 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.560871 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:59.560877 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:59.560939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:59.601176 1129259 cri.go:89] found id: ""
	I0318 14:23:59.601209 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.601219 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:59.601237 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:59.601253 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:59.616829 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:59.616862 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:59.695270 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:59.695300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:59.695316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:59.773564 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:59.773610 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.819326 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:59.819364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:58.198656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.699394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.311601 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.311669 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.300584 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.300628 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.372331 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:02.388245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:02.388333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:02.425594 1129259 cri.go:89] found id: ""
	I0318 14:24:02.425639 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.425655 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:02.425664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:02.425740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:02.467755 1129259 cri.go:89] found id: ""
	I0318 14:24:02.467786 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.467794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:02.467800 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:02.467890 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:02.510004 1129259 cri.go:89] found id: ""
	I0318 14:24:02.510035 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.510045 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:02.510051 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:02.510104 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:02.555590 1129259 cri.go:89] found id: ""
	I0318 14:24:02.555623 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.555632 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:02.555638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:02.555693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:02.595096 1129259 cri.go:89] found id: ""
	I0318 14:24:02.595125 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.595135 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:02.595141 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:02.595214 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:02.639452 1129259 cri.go:89] found id: ""
	I0318 14:24:02.639482 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.639491 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:02.639498 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:02.639563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:02.677653 1129259 cri.go:89] found id: ""
	I0318 14:24:02.677684 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.677700 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:02.677706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:02.677765 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:02.714853 1129259 cri.go:89] found id: ""
	I0318 14:24:02.714885 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.714898 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:02.714909 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:02.714923 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:02.767697 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:02.767742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:02.782786 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:02.782844 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:02.868981 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:02.869020 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:02.869037 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:02.944382 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:02.944421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.491779 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:05.507129 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:05.507213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:05.548809 1129259 cri.go:89] found id: ""
	I0318 14:24:05.548845 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.548858 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:05.548866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:05.548941 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:05.588005 1129259 cri.go:89] found id: ""
	I0318 14:24:05.588040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.588050 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:05.588056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:05.588108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:05.627670 1129259 cri.go:89] found id: ""
	I0318 14:24:05.627707 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.627720 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:05.627728 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:05.627814 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:05.666900 1129259 cri.go:89] found id: ""
	I0318 14:24:05.666936 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.666948 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:05.666957 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:05.667029 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:05.705796 1129259 cri.go:89] found id: ""
	I0318 14:24:05.705831 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.705844 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:05.705852 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:05.705923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:05.749842 1129259 cri.go:89] found id: ""
	I0318 14:24:05.749875 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.749888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:05.749896 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:05.749981 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:05.790843 1129259 cri.go:89] found id: ""
	I0318 14:24:05.790881 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.790896 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:05.790905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:05.790992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:05.832347 1129259 cri.go:89] found id: ""
	I0318 14:24:05.832383 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.832395 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:05.832408 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:05.832424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.874185 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:05.874219 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:05.929482 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:05.929534 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:05.945151 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:05.945187 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:06.024617 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:06.024644 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:06.024663 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:03.198564 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:05.198935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.811819 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.812462 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.300681 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.300912 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.799297 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.607030 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:08.622039 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:08.622140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:08.661599 1129259 cri.go:89] found id: ""
	I0318 14:24:08.661638 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.661647 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:08.661654 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:08.661728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:08.699890 1129259 cri.go:89] found id: ""
	I0318 14:24:08.699920 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.699931 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:08.699940 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:08.700009 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:08.745504 1129259 cri.go:89] found id: ""
	I0318 14:24:08.745541 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.745554 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:08.745562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:08.745624 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:08.784162 1129259 cri.go:89] found id: ""
	I0318 14:24:08.784204 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.784217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:08.784226 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:08.784302 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:08.824197 1129259 cri.go:89] found id: ""
	I0318 14:24:08.824227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.824236 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:08.824242 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:08.824301 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:08.865096 1129259 cri.go:89] found id: ""
	I0318 14:24:08.865128 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.865137 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:08.865146 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:08.865207 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:08.905337 1129259 cri.go:89] found id: ""
	I0318 14:24:08.905371 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.905385 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:08.905393 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:08.905477 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:08.945837 1129259 cri.go:89] found id: ""
	I0318 14:24:08.945880 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.945894 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:08.945906 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:08.945925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:09.023425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:09.023454 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:09.023473 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:09.107945 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:09.107989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:09.149742 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:09.149804 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:09.202813 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:09.202856 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:07.699433 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.198062 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.311072 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:13.311533 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:15.313064 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:12.799619 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.800637 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.720686 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:11.735125 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:11.735218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:11.772164 1129259 cri.go:89] found id: ""
	I0318 14:24:11.772198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.772210 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:11.772218 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:11.772285 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:11.811279 1129259 cri.go:89] found id: ""
	I0318 14:24:11.811309 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.811326 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:11.811334 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:11.811402 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:11.855011 1129259 cri.go:89] found id: ""
	I0318 14:24:11.855052 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.855065 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:11.855073 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:11.855146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:11.893168 1129259 cri.go:89] found id: ""
	I0318 14:24:11.893198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.893206 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:11.893212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:11.893273 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:11.930545 1129259 cri.go:89] found id: ""
	I0318 14:24:11.930583 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.930598 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:11.930608 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:11.930680 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:11.974014 1129259 cri.go:89] found id: ""
	I0318 14:24:11.974040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.974049 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:11.974063 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:11.974147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:12.025218 1129259 cri.go:89] found id: ""
	I0318 14:24:12.025247 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.025257 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:12.025263 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:12.025340 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:12.068361 1129259 cri.go:89] found id: ""
	I0318 14:24:12.068393 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.068406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:12.068425 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:12.068444 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:12.122840 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:12.122892 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:12.138841 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:12.138877 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:12.219567 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:12.219588 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:12.219602 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:12.307322 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:12.307368 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:14.855576 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:14.870076 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:14.870160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:14.910346 1129259 cri.go:89] found id: ""
	I0318 14:24:14.910387 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.910399 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:14.910407 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:14.910479 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:14.957120 1129259 cri.go:89] found id: ""
	I0318 14:24:14.957151 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.957165 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:14.957170 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:14.957238 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:14.998329 1129259 cri.go:89] found id: ""
	I0318 14:24:14.998360 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.998372 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:14.998381 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:14.998450 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:15.036994 1129259 cri.go:89] found id: ""
	I0318 14:24:15.037025 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.037034 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:15.037040 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:15.037095 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:15.075241 1129259 cri.go:89] found id: ""
	I0318 14:24:15.075272 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.075282 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:15.075288 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:15.075368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:15.114149 1129259 cri.go:89] found id: ""
	I0318 14:24:15.114199 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.114208 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:15.114215 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:15.114296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:15.155710 1129259 cri.go:89] found id: ""
	I0318 14:24:15.155745 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.155755 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:15.155762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:15.155847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:15.196863 1129259 cri.go:89] found id: ""
	I0318 14:24:15.196899 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.196910 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:15.196928 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:15.196946 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:15.253103 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:15.253147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:15.268783 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:15.268829 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:15.352694 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:15.352723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:15.352743 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:15.435023 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:15.435068 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:12.201234 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.698988 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.811663 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.812068 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:16.801294 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.301959 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.978170 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.994862 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:17.994929 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:18.036067 1129259 cri.go:89] found id: ""
	I0318 14:24:18.036103 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.036112 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:18.036119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:18.036186 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:18.081249 1129259 cri.go:89] found id: ""
	I0318 14:24:18.081280 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.081291 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:18.081297 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:18.081352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:18.122336 1129259 cri.go:89] found id: ""
	I0318 14:24:18.122367 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.122376 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:18.122382 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:18.122441 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:18.163897 1129259 cri.go:89] found id: ""
	I0318 14:24:18.163931 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.163940 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:18.163949 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:18.164012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:18.206744 1129259 cri.go:89] found id: ""
	I0318 14:24:18.206781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.206792 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:18.206798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:18.206881 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:18.245738 1129259 cri.go:89] found id: ""
	I0318 14:24:18.245767 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.245778 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:18.245786 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:18.245851 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:18.285181 1129259 cri.go:89] found id: ""
	I0318 14:24:18.285211 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.285221 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:18.285228 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:18.285282 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:18.328130 1129259 cri.go:89] found id: ""
	I0318 14:24:18.328162 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.328174 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:18.328193 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:18.328210 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:18.410346 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:18.410387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:18.467118 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:18.467154 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:18.530635 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:18.530704 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:18.549898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:18.549952 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:18.646134 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.146368 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.199048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.200040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:22.312401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.812678 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.799684 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.301211 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.162077 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:21.162156 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:21.200211 1129259 cri.go:89] found id: ""
	I0318 14:24:21.200242 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.200251 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:21.200257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:21.200329 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:21.241228 1129259 cri.go:89] found id: ""
	I0318 14:24:21.241265 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.241277 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:21.241284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:21.241359 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:21.278110 1129259 cri.go:89] found id: ""
	I0318 14:24:21.278147 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.278159 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:21.278167 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:21.278240 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:21.317067 1129259 cri.go:89] found id: ""
	I0318 14:24:21.317104 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.317115 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:21.317124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:21.317201 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:21.356217 1129259 cri.go:89] found id: ""
	I0318 14:24:21.356251 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.356260 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:21.356267 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:21.356326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:21.394990 1129259 cri.go:89] found id: ""
	I0318 14:24:21.395031 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.395047 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:21.395056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:21.395136 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:21.435880 1129259 cri.go:89] found id: ""
	I0318 14:24:21.435913 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.435928 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:21.435937 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:21.436023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:21.477754 1129259 cri.go:89] found id: ""
	I0318 14:24:21.477801 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.477814 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:21.477826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:21.477851 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:21.493178 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:21.493220 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:21.570200 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.570239 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:21.570257 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:21.658100 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:21.658147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.703286 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:21.703327 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.266730 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:24.285544 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:24.285655 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:24.338183 1129259 cri.go:89] found id: ""
	I0318 14:24:24.338234 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.338248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:24.338256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:24.338326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:24.407496 1129259 cri.go:89] found id: ""
	I0318 14:24:24.407529 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.407543 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:24.407551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:24.407618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:24.457689 1129259 cri.go:89] found id: ""
	I0318 14:24:24.457728 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.457741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:24.457749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:24.457831 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:24.498685 1129259 cri.go:89] found id: ""
	I0318 14:24:24.498709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.498718 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:24.498725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:24.498783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:24.537966 1129259 cri.go:89] found id: ""
	I0318 14:24:24.537999 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.538009 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:24.538016 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:24.538070 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:24.576493 1129259 cri.go:89] found id: ""
	I0318 14:24:24.576522 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.576532 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:24.576538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:24.576592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:24.613764 1129259 cri.go:89] found id: ""
	I0318 14:24:24.613799 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.613812 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:24.613820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:24.613893 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:24.655862 1129259 cri.go:89] found id: ""
	I0318 14:24:24.655892 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.655906 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:24.655919 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:24.655937 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.710557 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:24.710604 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:24.725755 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:24.725792 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:24.805585 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:24.805616 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:24.805633 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:24.889922 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:24.889989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.699674 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.199382 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.312672 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.315087 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:26.800594 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.299763 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.437998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:27.454560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:27.454664 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:27.493973 1129259 cri.go:89] found id: ""
	I0318 14:24:27.494003 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.494011 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:27.494019 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:27.494078 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:27.543071 1129259 cri.go:89] found id: ""
	I0318 14:24:27.543109 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.543122 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:27.543131 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:27.543211 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:27.586163 1129259 cri.go:89] found id: ""
	I0318 14:24:27.586196 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.586212 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:27.586220 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:27.586324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:27.625233 1129259 cri.go:89] found id: ""
	I0318 14:24:27.625271 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.625284 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:27.625293 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:27.625365 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:27.663729 1129259 cri.go:89] found id: ""
	I0318 14:24:27.663772 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.663782 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:27.663798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:27.663887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:27.702041 1129259 cri.go:89] found id: ""
	I0318 14:24:27.702072 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.702082 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:27.702090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:27.702158 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:27.745186 1129259 cri.go:89] found id: ""
	I0318 14:24:27.745216 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.745226 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:27.745233 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:27.745296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:27.786673 1129259 cri.go:89] found id: ""
	I0318 14:24:27.786709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.786719 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:27.786729 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:27.786742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:27.842472 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:27.842531 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:27.856985 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:27.857016 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:27.935445 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:27.935478 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:27.935496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:28.024737 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:28.024795 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:30.571003 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:30.585617 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:30.585714 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:30.628461 1129259 cri.go:89] found id: ""
	I0318 14:24:30.628488 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.628497 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:30.628503 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:30.628566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:30.674555 1129259 cri.go:89] found id: ""
	I0318 14:24:30.674595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.674610 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:30.674618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:30.674695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:30.714899 1129259 cri.go:89] found id: ""
	I0318 14:24:30.714950 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.714961 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:30.714970 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:30.715039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:30.756263 1129259 cri.go:89] found id: ""
	I0318 14:24:30.756295 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.756305 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:30.756311 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:30.756366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:30.795213 1129259 cri.go:89] found id: ""
	I0318 14:24:30.795244 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.795258 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:30.795265 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:30.795336 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:30.837198 1129259 cri.go:89] found id: ""
	I0318 14:24:30.837233 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.837242 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:30.837248 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:30.837306 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:30.875367 1129259 cri.go:89] found id: ""
	I0318 14:24:30.875404 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.875417 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:30.875427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:30.875510 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:30.918664 1129259 cri.go:89] found id: ""
	I0318 14:24:30.918701 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.918713 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:30.918727 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:30.918747 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:31.004325 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:31.004350 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:31.004367 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:31.093837 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:31.093882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:31.138285 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:31.138318 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:26.698769 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:28.700212 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.200571 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.811482 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.812980 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.299818 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.300656 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.798808 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.192059 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:31.192106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:33.708873 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:33.723861 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:33.723954 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:33.766843 1129259 cri.go:89] found id: ""
	I0318 14:24:33.766884 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.766899 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:33.766908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:33.766991 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:33.808273 1129259 cri.go:89] found id: ""
	I0318 14:24:33.808308 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.808319 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:33.808327 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:33.808401 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:33.847755 1129259 cri.go:89] found id: ""
	I0318 14:24:33.847789 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.847801 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:33.847823 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:33.847909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:33.888733 1129259 cri.go:89] found id: ""
	I0318 14:24:33.888785 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.888807 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:33.888817 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:33.888892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:33.927231 1129259 cri.go:89] found id: ""
	I0318 14:24:33.927281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.927294 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:33.927301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:33.927370 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:33.968573 1129259 cri.go:89] found id: ""
	I0318 14:24:33.968602 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.968612 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:33.968619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:33.968685 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:34.019265 1129259 cri.go:89] found id: ""
	I0318 14:24:34.019298 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.019314 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:34.019321 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:34.019392 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:34.059195 1129259 cri.go:89] found id: ""
	I0318 14:24:34.059226 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.059237 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:34.059251 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:34.059268 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:34.101211 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:34.101252 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:34.154985 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:34.155029 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:34.169762 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:34.169798 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:34.247258 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:34.247289 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:34.247304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:33.698578 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.698656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.814759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:38.311080 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:40.312503 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:37.800024 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.801292 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:36.829539 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:36.844908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:36.845003 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:36.883646 1129259 cri.go:89] found id: ""
	I0318 14:24:36.883673 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.883682 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:36.883688 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:36.883742 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:36.927651 1129259 cri.go:89] found id: ""
	I0318 14:24:36.927685 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.927700 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:36.927706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:36.927774 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:36.972206 1129259 cri.go:89] found id: ""
	I0318 14:24:36.972243 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.972256 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:36.972264 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:36.972337 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:37.011161 1129259 cri.go:89] found id: ""
	I0318 14:24:37.011203 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.011217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:37.011225 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:37.011293 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:37.050426 1129259 cri.go:89] found id: ""
	I0318 14:24:37.050456 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.050465 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:37.050472 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:37.050525 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:37.090240 1129259 cri.go:89] found id: ""
	I0318 14:24:37.090277 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.090288 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:37.090296 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:37.090371 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:37.138359 1129259 cri.go:89] found id: ""
	I0318 14:24:37.138392 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.138405 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:37.138414 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:37.138484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:37.175367 1129259 cri.go:89] found id: ""
	I0318 14:24:37.175397 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.175406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:37.175419 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:37.175438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.190633 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:37.190665 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:37.266426 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:37.266455 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:37.266474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:37.352005 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:37.352052 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:37.398004 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:37.398042 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:39.957926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:39.972906 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:39.972994 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:40.015482 1129259 cri.go:89] found id: ""
	I0318 14:24:40.015531 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.015543 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:40.015553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:40.015632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:40.057869 1129259 cri.go:89] found id: ""
	I0318 14:24:40.057901 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.057913 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:40.057921 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:40.057992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:40.099638 1129259 cri.go:89] found id: ""
	I0318 14:24:40.099666 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.099676 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:40.099683 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:40.099748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:40.137566 1129259 cri.go:89] found id: ""
	I0318 14:24:40.137607 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.137619 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:40.137629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:40.137698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:40.178781 1129259 cri.go:89] found id: ""
	I0318 14:24:40.178816 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.178828 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:40.178835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:40.178902 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:40.221065 1129259 cri.go:89] found id: ""
	I0318 14:24:40.221106 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.221118 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:40.221135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:40.221213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:40.262154 1129259 cri.go:89] found id: ""
	I0318 14:24:40.262193 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.262204 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:40.262212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:40.262288 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:40.302898 1129259 cri.go:89] found id: ""
	I0318 14:24:40.302932 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.302944 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:40.302957 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:40.302973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:40.384224 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:40.384248 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:40.384270 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:40.473257 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:40.473313 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:40.513518 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:40.513571 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:40.569342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:40.569393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.698736 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.699014 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.813028 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.814259 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.300121 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.802581 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:43.085260 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:43.100701 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:43.100773 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:43.141395 1129259 cri.go:89] found id: ""
	I0318 14:24:43.141441 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.141453 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:43.141462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:43.141531 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:43.185883 1129259 cri.go:89] found id: ""
	I0318 14:24:43.185918 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.185929 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:43.185938 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:43.186012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:43.225249 1129259 cri.go:89] found id: ""
	I0318 14:24:43.225281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.225292 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:43.225301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:43.225375 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:43.270433 1129259 cri.go:89] found id: ""
	I0318 14:24:43.270474 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.270484 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:43.270491 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:43.270557 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:43.312947 1129259 cri.go:89] found id: ""
	I0318 14:24:43.312975 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.312986 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:43.312994 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:43.313061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:43.352095 1129259 cri.go:89] found id: ""
	I0318 14:24:43.352130 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.352144 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:43.352153 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:43.352222 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:43.394789 1129259 cri.go:89] found id: ""
	I0318 14:24:43.394820 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.394833 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:43.394840 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:43.394913 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:43.440612 1129259 cri.go:89] found id: ""
	I0318 14:24:43.440646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.440655 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:43.440668 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:43.440686 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:43.497257 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:43.497304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:43.513680 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:43.513715 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:43.599437 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:43.599471 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:43.599490 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:43.681435 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:43.681480 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:42.198235 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.199088 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.312598 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.814542 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.300765 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.801469 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:46.227650 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:46.242656 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:46.242724 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:46.288400 1129259 cri.go:89] found id: ""
	I0318 14:24:46.288434 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.288448 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:46.288457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:46.288544 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:46.327648 1129259 cri.go:89] found id: ""
	I0318 14:24:46.327691 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.327704 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:46.327712 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:46.327785 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:46.370251 1129259 cri.go:89] found id: ""
	I0318 14:24:46.370292 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.370305 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:46.370322 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:46.370404 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:46.413589 1129259 cri.go:89] found id: ""
	I0318 14:24:46.413629 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.413639 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:46.413646 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:46.413712 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:46.453557 1129259 cri.go:89] found id: ""
	I0318 14:24:46.453593 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.453606 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:46.453615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:46.453696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:46.492502 1129259 cri.go:89] found id: ""
	I0318 14:24:46.492538 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.492552 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:46.492560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:46.492641 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:46.534614 1129259 cri.go:89] found id: ""
	I0318 14:24:46.534646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.534656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:46.534662 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:46.534722 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:46.576300 1129259 cri.go:89] found id: ""
	I0318 14:24:46.576331 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.576340 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:46.576351 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:46.576363 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.665281 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:46.665329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:46.712011 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:46.712050 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:46.799071 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:46.799128 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:46.814892 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:46.814921 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:46.893065 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.393340 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:49.407307 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:49.407388 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:49.449296 1129259 cri.go:89] found id: ""
	I0318 14:24:49.449330 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.449343 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:49.449351 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:49.449412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:49.489753 1129259 cri.go:89] found id: ""
	I0318 14:24:49.489781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.489790 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:49.489796 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:49.489865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:49.533692 1129259 cri.go:89] found id: ""
	I0318 14:24:49.533740 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.533756 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:49.533765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:49.533849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:49.580932 1129259 cri.go:89] found id: ""
	I0318 14:24:49.580980 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.580992 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:49.581001 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:49.581090 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:49.617642 1129259 cri.go:89] found id: ""
	I0318 14:24:49.617672 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.617684 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:49.617692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:49.617758 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:49.655313 1129259 cri.go:89] found id: ""
	I0318 14:24:49.655342 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.655351 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:49.655358 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:49.655412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:49.694613 1129259 cri.go:89] found id: ""
	I0318 14:24:49.694645 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.694656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:49.694665 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:49.694735 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:49.736954 1129259 cri.go:89] found id: ""
	I0318 14:24:49.737005 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.737017 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:49.737030 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:49.737051 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:49.779496 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:49.779540 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:49.836505 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:49.836549 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:49.853299 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:49.853329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:49.929231 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.929254 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:49.929269 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.699746 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.198789 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:51.199313 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.311753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.311952 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.301766 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.513104 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:52.534931 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:52.535032 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:52.578668 1129259 cri.go:89] found id: ""
	I0318 14:24:52.578706 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.578720 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:52.578731 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:52.578788 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:52.616799 1129259 cri.go:89] found id: ""
	I0318 14:24:52.616829 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.616838 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:52.616845 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:52.616909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:52.659502 1129259 cri.go:89] found id: ""
	I0318 14:24:52.659595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.659616 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:52.659627 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:52.659696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:52.704402 1129259 cri.go:89] found id: ""
	I0318 14:24:52.704431 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.704439 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:52.704446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:52.704524 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:52.748018 1129259 cri.go:89] found id: ""
	I0318 14:24:52.748043 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.748052 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:52.748059 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:52.748128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:52.786901 1129259 cri.go:89] found id: ""
	I0318 14:24:52.786942 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.786956 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:52.786966 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:52.787040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:52.828259 1129259 cri.go:89] found id: ""
	I0318 14:24:52.828288 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.828298 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:52.828304 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:52.828360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:52.867439 1129259 cri.go:89] found id: ""
	I0318 14:24:52.867470 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.867482 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:52.867495 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:52.867513 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:52.920709 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:52.920755 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:52.936596 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:52.936631 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:53.012271 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:53.012300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:53.012315 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.092318 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:53.092358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:55.642662 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:55.656650 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:55.656725 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:55.700050 1129259 cri.go:89] found id: ""
	I0318 14:24:55.700085 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.700099 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:55.700109 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:55.700183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:55.742561 1129259 cri.go:89] found id: ""
	I0318 14:24:55.742599 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.742608 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:55.742614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:55.742668 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:55.780395 1129259 cri.go:89] found id: ""
	I0318 14:24:55.780427 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.780435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:55.780442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:55.780505 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:55.819259 1129259 cri.go:89] found id: ""
	I0318 14:24:55.819291 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.819301 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:55.819310 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:55.819366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:55.859189 1129259 cri.go:89] found id: ""
	I0318 14:24:55.859227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.859240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:55.859249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:55.859322 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:55.900012 1129259 cri.go:89] found id: ""
	I0318 14:24:55.900050 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.900062 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:55.900070 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:55.900146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:55.936548 1129259 cri.go:89] found id: ""
	I0318 14:24:55.936578 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.936587 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:55.936595 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:55.936661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:55.977201 1129259 cri.go:89] found id: ""
	I0318 14:24:55.977241 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.977254 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:55.977266 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:55.977281 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:56.030548 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:56.030603 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:56.047923 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:56.047959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:56.129425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:56.129457 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:56.129474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.199935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:55.699461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.811981 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.814200 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.799464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.800623 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.224109 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:56.224173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.771513 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:58.786323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:58.786416 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:58.832801 1129259 cri.go:89] found id: ""
	I0318 14:24:58.832843 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.832856 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:58.832868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:58.832945 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:58.873757 1129259 cri.go:89] found id: ""
	I0318 14:24:58.873792 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.873802 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:58.873811 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:58.873875 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:58.920727 1129259 cri.go:89] found id: ""
	I0318 14:24:58.920759 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.920769 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:58.920775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:58.920841 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:58.975483 1129259 cri.go:89] found id: ""
	I0318 14:24:58.975524 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.975538 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:58.975549 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:58.975627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:59.027055 1129259 cri.go:89] found id: ""
	I0318 14:24:59.027092 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.027104 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:59.027113 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:59.027195 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:59.073394 1129259 cri.go:89] found id: ""
	I0318 14:24:59.073435 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.073457 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:59.073466 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:59.073536 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:59.114945 1129259 cri.go:89] found id: ""
	I0318 14:24:59.114982 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.114991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:59.114998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:59.115056 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:59.155496 1129259 cri.go:89] found id: ""
	I0318 14:24:59.155533 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.155545 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:59.155558 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:59.155574 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:59.214435 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:59.214476 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:59.230733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:59.230780 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:59.308976 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:59.309007 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:59.309024 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:59.396237 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:59.396287 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.198049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:00.199613 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.312698 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.811687 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.299462 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.300239 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:05.301621 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.941736 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:01.955973 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:01.956058 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:01.995149 1129259 cri.go:89] found id: ""
	I0318 14:25:01.995187 1129259 logs.go:276] 0 containers: []
	W0318 14:25:01.995208 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:01.995217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:01.995287 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:02.036739 1129259 cri.go:89] found id: ""
	I0318 14:25:02.036780 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.036794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:02.036804 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:02.036880 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:02.074909 1129259 cri.go:89] found id: ""
	I0318 14:25:02.074937 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.074947 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:02.074954 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:02.075039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:02.112164 1129259 cri.go:89] found id: ""
	I0318 14:25:02.112203 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.112215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:02.112223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:02.112281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:02.150756 1129259 cri.go:89] found id: ""
	I0318 14:25:02.150795 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.150808 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:02.150816 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:02.150885 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:02.194475 1129259 cri.go:89] found id: ""
	I0318 14:25:02.194511 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.194522 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:02.194531 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:02.194603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:02.237472 1129259 cri.go:89] found id: ""
	I0318 14:25:02.237499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.237508 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:02.237514 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:02.237582 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:02.278094 1129259 cri.go:89] found id: ""
	I0318 14:25:02.278136 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.278157 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:02.278171 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:02.278190 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:02.366946 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:02.367004 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.412234 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:02.412267 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:02.470036 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:02.470109 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:02.487051 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:02.487085 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:02.574515 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.074768 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:05.090386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:05.090466 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:05.131144 1129259 cri.go:89] found id: ""
	I0318 14:25:05.131180 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.131190 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:05.131198 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:05.131254 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:05.171613 1129259 cri.go:89] found id: ""
	I0318 14:25:05.171653 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.171668 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:05.171676 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:05.171748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:05.219256 1129259 cri.go:89] found id: ""
	I0318 14:25:05.219296 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.219310 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:05.219320 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:05.219410 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:05.258580 1129259 cri.go:89] found id: ""
	I0318 14:25:05.258615 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.258625 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:05.258633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:05.258688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:05.297198 1129259 cri.go:89] found id: ""
	I0318 14:25:05.297230 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.297240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:05.297249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:05.297319 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:05.341148 1129259 cri.go:89] found id: ""
	I0318 14:25:05.341184 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.341196 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:05.341205 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:05.341274 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:05.382094 1129259 cri.go:89] found id: ""
	I0318 14:25:05.382121 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.382129 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:05.382135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:05.382199 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:05.422027 1129259 cri.go:89] found id: ""
	I0318 14:25:05.422074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.422083 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:05.422092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:05.422106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:05.474193 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:05.474238 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:05.490325 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:05.490364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:05.566999 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.567029 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:05.567048 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:05.647205 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:05.647247 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.200341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:04.698040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:06.312239 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.811427 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:07.800597 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:10.300964 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.192390 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:08.207905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:08.207992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:08.247221 1129259 cri.go:89] found id: ""
	I0318 14:25:08.247257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.247269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:08.247278 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:08.247347 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:08.289460 1129259 cri.go:89] found id: ""
	I0318 14:25:08.289496 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.289509 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:08.289516 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:08.289601 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:08.330232 1129259 cri.go:89] found id: ""
	I0318 14:25:08.330273 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.330286 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:08.330294 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:08.330366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:08.368035 1129259 cri.go:89] found id: ""
	I0318 14:25:08.368074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.368086 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:08.368094 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:08.368170 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:08.413598 1129259 cri.go:89] found id: ""
	I0318 14:25:08.413631 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.413641 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:08.413647 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:08.413745 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:08.451706 1129259 cri.go:89] found id: ""
	I0318 14:25:08.451742 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.451754 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:08.451762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:08.451856 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:08.491037 1129259 cri.go:89] found id: ""
	I0318 14:25:08.491075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.491088 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:08.491096 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:08.491175 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:08.529376 1129259 cri.go:89] found id: ""
	I0318 14:25:08.529412 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.529423 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:08.529435 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:08.529453 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:08.586539 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:08.586580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:08.602197 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:08.602226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:08.678158 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:08.678186 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:08.678202 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:08.764272 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:08.764326 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:06.700315 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:09.198241 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.198296 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.312458 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:13.312602 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:12.799474 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:14.800216 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.307681 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:11.322482 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:11.322565 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:11.361333 1129259 cri.go:89] found id: ""
	I0318 14:25:11.361366 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.361378 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:11.361386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:11.361457 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:11.399404 1129259 cri.go:89] found id: ""
	I0318 14:25:11.399444 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.399468 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:11.399486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:11.399556 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:11.438279 1129259 cri.go:89] found id: ""
	I0318 14:25:11.438324 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.438338 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:11.438350 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:11.438426 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:11.474991 1129259 cri.go:89] found id: ""
	I0318 14:25:11.475039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.475050 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:11.475058 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:11.475128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:11.511152 1129259 cri.go:89] found id: ""
	I0318 14:25:11.511185 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.511195 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:11.511204 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:11.511271 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:11.549752 1129259 cri.go:89] found id: ""
	I0318 14:25:11.549794 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.549806 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:11.549814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:11.549886 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:11.587089 1129259 cri.go:89] found id: ""
	I0318 14:25:11.587117 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.587135 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:11.587152 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:11.587205 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:11.621515 1129259 cri.go:89] found id: ""
	I0318 14:25:11.621547 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.621559 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:11.621574 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:11.621592 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:11.680905 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:11.680948 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:11.696472 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:11.696508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:11.772013 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:11.772035 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:11.772054 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:11.855131 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:11.855182 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:14.396034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:14.410601 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:14.410677 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:14.449351 1129259 cri.go:89] found id: ""
	I0318 14:25:14.449392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.449404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:14.449413 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:14.449484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:14.488011 1129259 cri.go:89] found id: ""
	I0318 14:25:14.488039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.488049 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:14.488055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:14.488115 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:14.529089 1129259 cri.go:89] found id: ""
	I0318 14:25:14.529128 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.529141 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:14.529148 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:14.529219 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:14.567919 1129259 cri.go:89] found id: ""
	I0318 14:25:14.567952 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.567962 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:14.567975 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:14.568039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:14.604744 1129259 cri.go:89] found id: ""
	I0318 14:25:14.604785 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.604798 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:14.604806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:14.604872 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:14.643367 1129259 cri.go:89] found id: ""
	I0318 14:25:14.643396 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.643405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:14.643411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:14.643473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:14.680584 1129259 cri.go:89] found id: ""
	I0318 14:25:14.680623 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.680639 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:14.680652 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:14.680726 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:14.720040 1129259 cri.go:89] found id: ""
	I0318 14:25:14.720070 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.720080 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:14.720092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:14.720106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:14.773483 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:14.773525 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:14.788628 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:14.788664 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:14.862912 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:14.862941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:14.862959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:14.945001 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:14.945047 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:13.199314 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.199666 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.812120 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.813219 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.814195 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:16.800432 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.299589 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.491984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:17.505305 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:17.505373 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:17.548465 1129259 cri.go:89] found id: ""
	I0318 14:25:17.548493 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.548501 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:17.548508 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:17.548566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:17.590043 1129259 cri.go:89] found id: ""
	I0318 14:25:17.590075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.590084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:17.590090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:17.590147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:17.628014 1129259 cri.go:89] found id: ""
	I0318 14:25:17.628042 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.628051 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:17.628057 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:17.628108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:17.666781 1129259 cri.go:89] found id: ""
	I0318 14:25:17.666814 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.666826 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:17.666835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:17.666892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:17.705989 1129259 cri.go:89] found id: ""
	I0318 14:25:17.706028 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.706048 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:17.706056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:17.706134 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:17.743782 1129259 cri.go:89] found id: ""
	I0318 14:25:17.743815 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.743843 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:17.743853 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:17.743923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:17.787400 1129259 cri.go:89] found id: ""
	I0318 14:25:17.787431 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.787439 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:17.787446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:17.787509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:17.825236 1129259 cri.go:89] found id: ""
	I0318 14:25:17.825270 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.825279 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:17.825291 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:17.825309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:17.877845 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:17.877888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:17.893733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:17.893768 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:17.987782 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:17.987809 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:17.987845 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:18.077756 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:18.077802 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:20.625530 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:20.639692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:20.639783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:20.678892 1129259 cri.go:89] found id: ""
	I0318 14:25:20.678927 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.678939 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:20.678948 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:20.679020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:20.716077 1129259 cri.go:89] found id: ""
	I0318 14:25:20.716109 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.716119 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:20.716124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:20.716179 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:20.756708 1129259 cri.go:89] found id: ""
	I0318 14:25:20.756737 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.756748 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:20.756756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:20.756823 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:20.793692 1129259 cri.go:89] found id: ""
	I0318 14:25:20.793728 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.793740 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:20.793749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:20.793822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:20.834607 1129259 cri.go:89] found id: ""
	I0318 14:25:20.834638 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.834649 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:20.834657 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:20.834728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:20.872583 1129259 cri.go:89] found id: ""
	I0318 14:25:20.872616 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.872625 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:20.872632 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:20.872688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:20.906061 1129259 cri.go:89] found id: ""
	I0318 14:25:20.906099 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.906112 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:20.906120 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:20.906183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:20.942582 1129259 cri.go:89] found id: ""
	I0318 14:25:20.942612 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.942621 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:20.942632 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:20.942646 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:20.958461 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:20.958500 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:21.032841 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:21.032867 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:21.032896 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:21.110717 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:21.110764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:17.698783 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.698980 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.804733 1128788 pod_ready.go:81] duration metric: took 4m0.000568505s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:21.804764 1128788 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:21.804783 1128788 pod_ready.go:38] duration metric: took 4m13.068724908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:21.804834 1128788 kubeadm.go:591] duration metric: took 4m21.284795634s to restartPrimaryControlPlane
	W0318 14:25:21.804919 1128788 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:21.804954 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:21.300889 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:23.800547 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:25.803188 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.160015 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:21.160055 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:23.715103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:23.729231 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:23.729324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:23.779123 1129259 cri.go:89] found id: ""
	I0318 14:25:23.779157 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.779166 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:23.779172 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:23.779247 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:23.820353 1129259 cri.go:89] found id: ""
	I0318 14:25:23.820397 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.820410 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:23.820427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:23.820498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:23.857375 1129259 cri.go:89] found id: ""
	I0318 14:25:23.857405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.857416 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:23.857422 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:23.857490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:23.895114 1129259 cri.go:89] found id: ""
	I0318 14:25:23.895153 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.895165 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:23.895173 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:23.895239 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:23.939728 1129259 cri.go:89] found id: ""
	I0318 14:25:23.939764 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.939776 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:23.939784 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:23.939866 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:23.980585 1129259 cri.go:89] found id: ""
	I0318 14:25:23.980618 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.980631 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:23.980640 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:23.980711 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:24.019562 1129259 cri.go:89] found id: ""
	I0318 14:25:24.019596 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.019604 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:24.019611 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:24.019700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:24.069418 1129259 cri.go:89] found id: ""
	I0318 14:25:24.069455 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.069466 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:24.069478 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:24.069502 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:24.150859 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:24.150893 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:24.150913 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:24.258358 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:24.258408 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:24.304571 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:24.304609 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:24.366826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:24.366882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:21.699436 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:24.199193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:28.300495 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:30.300870 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:26.886056 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:26.904239 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:26.904315 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:26.950812 1129259 cri.go:89] found id: ""
	I0318 14:25:26.950847 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.950859 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:26.950866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:26.950957 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:26.999189 1129259 cri.go:89] found id: ""
	I0318 14:25:26.999224 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.999237 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:26.999246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:26.999312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:27.040452 1129259 cri.go:89] found id: ""
	I0318 14:25:27.040488 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.040499 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:27.040505 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:27.040586 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:27.078751 1129259 cri.go:89] found id: ""
	I0318 14:25:27.078782 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.078792 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:27.078798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:27.078865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:27.116428 1129259 cri.go:89] found id: ""
	I0318 14:25:27.116465 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.116477 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:27.116486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:27.116567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:27.152882 1129259 cri.go:89] found id: ""
	I0318 14:25:27.152922 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.152934 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:27.152942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:27.153023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:27.194470 1129259 cri.go:89] found id: ""
	I0318 14:25:27.194506 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.194518 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:27.194528 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:27.194599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:27.235910 1129259 cri.go:89] found id: ""
	I0318 14:25:27.235939 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.235948 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:27.235959 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:27.235973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:27.302132 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:27.302189 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:27.315806 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:27.315866 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:27.398210 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:27.398240 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:27.398255 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:27.479388 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:27.479432 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:30.026721 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:30.043060 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:30.043133 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:30.083373 1129259 cri.go:89] found id: ""
	I0318 14:25:30.083405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.083415 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:30.083423 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:30.083498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:30.121448 1129259 cri.go:89] found id: ""
	I0318 14:25:30.121485 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.121498 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:30.121506 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:30.121587 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:30.160527 1129259 cri.go:89] found id: ""
	I0318 14:25:30.160557 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.160566 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:30.160574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:30.160636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:30.199812 1129259 cri.go:89] found id: ""
	I0318 14:25:30.199870 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.199884 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:30.199895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:30.199970 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:30.242922 1129259 cri.go:89] found id: ""
	I0318 14:25:30.242959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.242971 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:30.242983 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:30.243053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:30.280918 1129259 cri.go:89] found id: ""
	I0318 14:25:30.280949 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.280962 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:30.280968 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:30.281021 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:30.319928 1129259 cri.go:89] found id: ""
	I0318 14:25:30.319959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.319968 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:30.319974 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:30.320040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:30.363693 1129259 cri.go:89] found id: ""
	I0318 14:25:30.363723 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.363733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:30.363744 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:30.363757 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:30.419559 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:30.419608 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:30.435030 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:30.435078 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:30.514849 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:30.514885 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:30.514903 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:30.601660 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:30.601711 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:26.700384 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:29.203012 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:32.800506 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:35.299464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.150817 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:33.165959 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:33.166045 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:33.205823 1129259 cri.go:89] found id: ""
	I0318 14:25:33.205862 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.205874 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:33.205884 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:33.205951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:33.267817 1129259 cri.go:89] found id: ""
	I0318 14:25:33.267865 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.267878 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:33.267886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:33.267977 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:33.309310 1129259 cri.go:89] found id: ""
	I0318 14:25:33.309338 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.309346 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:33.309353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:33.309417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:33.350169 1129259 cri.go:89] found id: ""
	I0318 14:25:33.350202 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.350215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:33.350223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:33.350289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:33.391919 1129259 cri.go:89] found id: ""
	I0318 14:25:33.391961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.391973 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:33.391981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:33.392049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:33.433001 1129259 cri.go:89] found id: ""
	I0318 14:25:33.433056 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.433069 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:33.433078 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:33.433150 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:33.474482 1129259 cri.go:89] found id: ""
	I0318 14:25:33.474513 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.474533 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:33.474542 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:33.474603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:33.512280 1129259 cri.go:89] found id: ""
	I0318 14:25:33.512314 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.512323 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:33.512333 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:33.512347 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:33.593336 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:33.593378 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:33.636001 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:33.636038 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:33.688881 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:33.688922 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:33.704549 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:33.704580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:33.779659 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:31.698372 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.699450 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.199443 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:37.299695 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:39.800741 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.280240 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:36.295566 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:36.295646 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:36.336195 1129259 cri.go:89] found id: ""
	I0318 14:25:36.336235 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.336248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:36.336257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:36.336334 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:36.378038 1129259 cri.go:89] found id: ""
	I0318 14:25:36.378084 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.378099 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:36.378110 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:36.378191 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:36.425389 1129259 cri.go:89] found id: ""
	I0318 14:25:36.425433 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.425446 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:36.425453 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:36.425512 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:36.464639 1129259 cri.go:89] found id: ""
	I0318 14:25:36.464683 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.464749 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:36.464763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:36.464828 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:36.509515 1129259 cri.go:89] found id: ""
	I0318 14:25:36.509550 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.509563 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:36.509573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:36.509645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:36.554761 1129259 cri.go:89] found id: ""
	I0318 14:25:36.554789 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.554800 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:36.554806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:36.554859 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:36.593817 1129259 cri.go:89] found id: ""
	I0318 14:25:36.593852 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.593861 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:36.593868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:36.593923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:36.634005 1129259 cri.go:89] found id: ""
	I0318 14:25:36.634038 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.634050 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:36.634063 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:36.634081 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:36.687869 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:36.687910 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:36.704507 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:36.704550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:36.785201 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:36.785257 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:36.785275 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:36.866058 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:36.866104 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
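	Each gathering cycle above runs the same sequence over SSH: list CRI containers for every control-plane component, then collect kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch reproducing those checks on the guest, using only commands that appear verbatim in the log (the loop wrapper is an assumption added for brevity):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"     # empty output means no matching container
	    done
	    sudo journalctl -u kubelet -n 400                 # kubelet logs
	    sudo journalctl -u crio -n 400                    # CRI-O logs
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig       # fails while no apiserver is listening on :8443

	The describe-nodes step is what produces the repeated "connection to the server localhost:8443 was refused" failures, since no kube-apiserver container is running at this point.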
	I0318 14:25:39.409796 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:39.426897 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:39.426972 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:39.472221 1129259 cri.go:89] found id: ""
	I0318 14:25:39.472257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.472269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:39.472285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:39.472352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:39.513920 1129259 cri.go:89] found id: ""
	I0318 14:25:39.513961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.513974 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:39.513981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:39.514049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:39.555502 1129259 cri.go:89] found id: ""
	I0318 14:25:39.555538 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.555552 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:39.555565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:39.555627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:39.601583 1129259 cri.go:89] found id: ""
	I0318 14:25:39.601614 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.601622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:39.601628 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:39.601693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:39.648429 1129259 cri.go:89] found id: ""
	I0318 14:25:39.648464 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.648473 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:39.648488 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:39.648564 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:39.698498 1129259 cri.go:89] found id: ""
	I0318 14:25:39.698531 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.698543 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:39.698551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:39.698617 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:39.751350 1129259 cri.go:89] found id: ""
	I0318 14:25:39.751392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.751403 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:39.751411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:39.751482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:39.801912 1129259 cri.go:89] found id: ""
	I0318 14:25:39.801944 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.801956 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:39.801968 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:39.801987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:39.816041 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:39.816076 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:39.899569 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:39.899599 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:39.899621 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:39.980913 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:39.980961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:40.026279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:40.026319 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:38.199879 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:40.698620 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:41.801098 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:44.301379 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:42.585034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:42.601055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:42.601161 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:42.652386 1129259 cri.go:89] found id: ""
	I0318 14:25:42.652422 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.652434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:42.652442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:42.652517 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:42.703304 1129259 cri.go:89] found id: ""
	I0318 14:25:42.703341 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.703353 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:42.703361 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:42.703433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:42.747938 1129259 cri.go:89] found id: ""
	I0318 14:25:42.747972 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.747983 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:42.747992 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:42.748061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:42.793889 1129259 cri.go:89] found id: ""
	I0318 14:25:42.793923 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.793934 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:42.793943 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:42.794012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:42.837991 1129259 cri.go:89] found id: ""
	I0318 14:25:42.838096 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.838124 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:42.838143 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:42.838225 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:42.881892 1129259 cri.go:89] found id: ""
	I0318 14:25:42.882011 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.882036 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:42.882055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:42.882140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:42.921175 1129259 cri.go:89] found id: ""
	I0318 14:25:42.921217 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.921229 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:42.921238 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:42.921310 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:42.966634 1129259 cri.go:89] found id: ""
	I0318 14:25:42.966674 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.966687 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:42.966702 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:42.966720 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:42.982243 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:42.982290 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:43.082154 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:43.082187 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:43.082205 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:43.175904 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:43.175953 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:43.220128 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:43.220224 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:45.785917 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:45.801648 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:45.801736 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:45.842731 1129259 cri.go:89] found id: ""
	I0318 14:25:45.842769 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.842782 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:45.842797 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:45.842858 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:45.887726 1129259 cri.go:89] found id: ""
	I0318 14:25:45.887771 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.887783 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:45.887792 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:45.887900 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:45.929349 1129259 cri.go:89] found id: ""
	I0318 14:25:45.929384 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.929395 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:45.929401 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:45.929473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:45.971540 1129259 cri.go:89] found id: ""
	I0318 14:25:45.971582 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.971595 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:45.971604 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:45.971681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:46.012461 1129259 cri.go:89] found id: ""
	I0318 14:25:46.012499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.012521 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:46.012530 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:46.012607 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:46.057527 1129259 cri.go:89] found id: ""
	I0318 14:25:46.057556 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.057566 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:46.057572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:46.057628 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:46.101115 1129259 cri.go:89] found id: ""
	I0318 14:25:46.101146 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.101156 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:46.101163 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:46.101218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:46.144690 1129259 cri.go:89] found id: ""
	I0318 14:25:46.144722 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.144733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:46.144747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:46.144763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:41.692077 1128964 pod_ready.go:81] duration metric: took 4m0.00104s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:41.692109 1128964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:41.692136 1128964 pod_ready.go:38] duration metric: took 4m13.711186182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:41.692170 1128964 kubeadm.go:591] duration metric: took 4m21.341445822s to restartPrimaryControlPlane
	W0318 14:25:41.692279 1128964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:41.692345 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:46.800687 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:49.300012 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:46.198508 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:46.198552 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:46.213920 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:46.213959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:46.307837 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:46.307870 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:46.307884 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:46.393348 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:46.393393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:48.947758 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:48.963529 1129259 kubeadm.go:591] duration metric: took 4m3.701563316s to restartPrimaryControlPlane
	W0318 14:25:48.963609 1129259 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:48.963632 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:50.782362 1129259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.818697959s)
	I0318 14:25:50.782464 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:50.798866 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:50.810841 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:50.822394 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:50.822417 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:50.822464 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:50.833695 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:50.833763 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:50.845393 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:50.856807 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:50.856882 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:50.868756 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.879442 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:50.879517 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.890725 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:50.901505 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:50.901576 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
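	The stale-config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that do not contain it (here the files simply do not exist, so each grep exits with status 2 and the rm is a no-op). A condensed sketch of that logic, with the endpoint and file list taken verbatim from the log and the loop and error handling simplified:

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not point at the expected endpoint
	      fi
	    done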
	I0318 14:25:50.912911 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:50.994085 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:25:50.994244 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:51.166111 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:51.166240 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:51.166390 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:51.374393 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:51.376093 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:51.376230 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:51.376323 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:51.376464 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:51.376538 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:51.376620 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:51.376715 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:51.376821 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:51.376930 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:51.377042 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:51.377141 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:51.377202 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:51.377292 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:51.485218 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:51.556003 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:51.865954 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:52.103582 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:52.120863 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:52.122310 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:52.122433 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:52.280292 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:54.173048 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.368065771s)
	I0318 14:25:54.173145 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:54.192139 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:54.204909 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:54.217096 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:54.217126 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:54.217182 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:54.227905 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:54.228009 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:54.239854 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:54.250668 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:54.250744 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:54.263509 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.274202 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:54.274265 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.285342 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:54.296064 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:54.296157 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:25:54.307985 1128788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:54.371118 1128788 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:25:54.371202 1128788 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:54.551187 1128788 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:54.551377 1128788 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:54.551551 1128788 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:54.780034 1128788 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:54.782426 1128788 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:54.782545 1128788 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:54.782650 1128788 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:54.782735 1128788 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:54.782829 1128788 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:54.782930 1128788 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:54.783213 1128788 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:54.783717 1128788 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:54.784390 1128788 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:54.784849 1128788 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:54.785263 1128788 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:54.785725 1128788 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:54.785826 1128788 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:55.130998 1128788 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:55.387076 1128788 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:55.517240 1128788 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:51.300209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:53.303010 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.800703 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.906565 1128788 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:55.907198 1128788 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:55.909674 1128788 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:52.282451 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:25:52.282559 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:52.289015 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:52.290093 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:52.290987 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:52.293794 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:55.912196 1128788 out.go:204]   - Booting up control plane ...
	I0318 14:25:55.912323 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:55.912407 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:55.912494 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:55.932596 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:55.935171 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:55.935520 1128788 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:56.083395 1128788 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:58.300288 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:00.800291 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:02.086878 1128788 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002842 seconds
	I0318 14:26:02.087052 1128788 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:02.102499 1128788 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:02.637889 1128788 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:02.638152 1128788 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-767719 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:03.157386 1128788 kubeadm.go:309] [bootstrap-token] Using token: do2whq.efhsaljmpmqgv9gj
	I0318 14:26:03.159248 1128788 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:03.159429 1128788 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:03.167328 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:03.180628 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:03.185253 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:03.190014 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:03.202714 1128788 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:03.223282 1128788 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:03.504303 1128788 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:03.614837 1128788 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:03.614872 1128788 kubeadm.go:309] 
	I0318 14:26:03.614978 1128788 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:03.615004 1128788 kubeadm.go:309] 
	I0318 14:26:03.615107 1128788 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:03.615117 1128788 kubeadm.go:309] 
	I0318 14:26:03.615149 1128788 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:03.615219 1128788 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:03.615285 1128788 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:03.615293 1128788 kubeadm.go:309] 
	I0318 14:26:03.615354 1128788 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:03.615365 1128788 kubeadm.go:309] 
	I0318 14:26:03.615421 1128788 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:03.615430 1128788 kubeadm.go:309] 
	I0318 14:26:03.615486 1128788 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:03.615578 1128788 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:03.615669 1128788 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:03.615679 1128788 kubeadm.go:309] 
	I0318 14:26:03.615778 1128788 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:03.615887 1128788 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:03.615897 1128788 kubeadm.go:309] 
	I0318 14:26:03.615998 1128788 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616120 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:03.616149 1128788 kubeadm.go:309] 	--control-plane 
	I0318 14:26:03.616159 1128788 kubeadm.go:309] 
	I0318 14:26:03.616266 1128788 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:03.616276 1128788 kubeadm.go:309] 
	I0318 14:26:03.616371 1128788 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616500 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:03.617330 1128788 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:03.617374 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:26:03.617384 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:03.619394 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:03.620836 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:03.665582 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:26:03.812834 1128788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:03.812897 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:03.812943 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-767719 minikube.k8s.io/updated_at=2024_03_18T14_26_03_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=embed-certs-767719 minikube.k8s.io/primary=true
	I0318 14:26:03.899419 1128788 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:04.104407 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:04.604499 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.104532 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.605047 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:02.800707 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:04.802167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:06.105187 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:06.604462 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.104411 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.605096 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.104448 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.604430 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.104707 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.605130 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.104955 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.605165 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.300575 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:09.798776 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:11.104436 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.605273 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.104851 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.604819 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.104669 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.605089 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.105486 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.604568 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.104455 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.604422 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.799935 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:13.800907 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:15.801754 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:16.105107 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:16.205506 1128788 kubeadm.go:1107] duration metric: took 12.39266353s to wait for elevateKubeSystemPrivileges
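	The repeated "get sa default" runs between 14:26:04 and 14:26:16 are a readiness poll: the step retries until the default service account exists, which is what the elevateKubeSystemPrivileges wait reported above is measuring. A minimal sketch of the same poll, with the kubectl command taken verbatim from the log and the retry loop an assumption:

	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # the log timestamps show roughly two attempts per second
	    done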
	W0318 14:26:16.205558 1128788 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:16.205570 1128788 kubeadm.go:393] duration metric: took 5m15.738081871s to StartCluster
	I0318 14:26:16.205599 1128788 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.205720 1128788 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:16.208645 1128788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.209157 1128788 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:16.210915 1128788 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:16.209206 1128788 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:16.209401 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:16.212258 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:16.212275 1128788 addons.go:69] Setting default-storageclass=true in profile "embed-certs-767719"
	I0318 14:26:16.212351 1128788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-767719"
	I0318 14:26:16.212260 1128788 addons.go:69] Setting metrics-server=true in profile "embed-certs-767719"
	I0318 14:26:16.212415 1128788 addons.go:234] Setting addon metrics-server=true in "embed-certs-767719"
	W0318 14:26:16.212431 1128788 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:16.212469 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212260 1128788 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-767719"
	I0318 14:26:16.212512 1128788 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-767719"
	W0318 14:26:16.212527 1128788 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:16.212560 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.212983 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213003 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213028 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.213040 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.231532 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0318 14:26:16.231543 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0318 14:26:16.232128 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0318 14:26:16.232280 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232284 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232882 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.232907 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.232922 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.233258 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233284 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.233360 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.233479 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233501 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.235956 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236151 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236372 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236411 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.236545 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236568 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.240163 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.244336 1128788 addons.go:234] Setting addon default-storageclass=true in "embed-certs-767719"
	W0318 14:26:16.244370 1128788 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:16.244407 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.244845 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.244894 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.257940 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0318 14:26:16.258701 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.259359 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.259386 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.259769 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.260030 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.262272 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.262286 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0318 14:26:16.264459 1128788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:16.262834 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.265430 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I0318 14:26:16.266198 1128788 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.266220 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:16.266240 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.266482 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.266663 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.266676 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267253 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.267277 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267753 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.268456 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.268605 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.269068 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.269098 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.269804 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270398 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.270420 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270711 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.270989 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.271183 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.271362 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.271984 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.273854 1128788 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:14.305258 1128964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.612890386s)
	I0318 14:26:14.305324 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:14.325572 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:26:14.337875 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:26:14.350490 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:26:14.350530 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:26:14.350592 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:26:14.361521 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:26:14.361612 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:26:14.372767 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:26:14.383545 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:26:14.383614 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:26:14.394057 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.404187 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:26:14.404261 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.415029 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:26:14.425738 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:26:14.425820 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:26:14.436847 1128964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:26:14.674909 1128964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:16.275278 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:16.275298 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:16.275323 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.278500 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.278909 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.278939 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.279230 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.279437 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.279612 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.279748 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.286716 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0318 14:26:16.287176 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.287651 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.287678 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.288057 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.288248 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.290084 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.290359 1128788 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.290381 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:16.290404 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.293253 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293662 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.293688 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293886 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.294078 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.294241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.294398 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.460832 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:16.537089 1128788 node_ready.go:35] waiting up to 6m0s for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550362 1128788 node_ready.go:49] node "embed-certs-767719" has status "Ready":"True"
	I0318 14:26:16.550391 1128788 node_ready.go:38] duration metric: took 13.195546ms for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550405 1128788 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:16.557745 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:16.638531 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:16.638565 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:16.664638 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.762661 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:16.762713 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:16.792712 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.859169 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:16.859200 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:16.954827 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:18.103559 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.103592 1128788 pod_ready.go:81] duration metric: took 1.545818643s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.103606 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.256039 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.591350359s)
	I0318 14:26:18.256112 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256129 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256483 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256513 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256530 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256528 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.256541 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256918 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256936 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256950 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.264761 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.264788 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.265133 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.265164 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.265193 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.652953 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.653088 1128788 pod_ready.go:81] duration metric: took 549.466665ms for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.653124 1128788 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674506 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.674553 1128788 pod_ready.go:81] duration metric: took 21.386005ms for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674568 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.680422 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.887663901s)
	I0318 14:26:18.680486 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680498 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.680875 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.680887 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.680903 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.680921 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680928 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.681198 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.681199 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.681277 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.711919 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.711954 1128788 pod_ready.go:81] duration metric: took 37.376915ms for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.711968 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730096 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.730129 1128788 pod_ready.go:81] duration metric: took 18.151839ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730145 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.756000 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.801120989s)
	I0318 14:26:18.756076 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756091 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756416 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756435 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756445 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756452 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756849 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756883 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756895 1128788 addons.go:470] Verifying addon metrics-server=true in "embed-certs-767719"
	I0318 14:26:18.756917 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.759019 1128788 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 14:26:18.760442 1128788 addons.go:505] duration metric: took 2.551236037s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 14:26:18.942164 1128788 pod_ready.go:92] pod "kube-proxy-f4547" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.942196 1128788 pod_ready.go:81] duration metric: took 212.040337ms for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.942205 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341772 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:19.341808 1128788 pod_ready.go:81] duration metric: took 399.594033ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341820 1128788 pod_ready.go:38] duration metric: took 2.791403027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:19.341841 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:19.341921 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:19.362110 1128788 api_server.go:72] duration metric: took 3.152894755s to wait for apiserver process to appear ...
	I0318 14:26:19.362150 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:19.362209 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:26:19.368138 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:26:19.369583 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:19.369608 1128788 api_server.go:131] duration metric: took 7.450993ms to wait for apiserver health ...
	I0318 14:26:19.369617 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:19.545388 1128788 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:19.545423 1128788 system_pods.go:61] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.545428 1128788 system_pods.go:61] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.545431 1128788 system_pods.go:61] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.545434 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.545438 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.545441 1128788 system_pods.go:61] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.545443 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.545449 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.545455 1128788 system_pods.go:61] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.545464 1128788 system_pods.go:74] duration metric: took 175.840386ms to wait for pod list to return data ...
	I0318 14:26:19.545473 1128788 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:19.741364 1128788 default_sa.go:45] found service account: "default"
	I0318 14:26:19.741405 1128788 default_sa.go:55] duration metric: took 195.920075ms for default service account to be created ...
	I0318 14:26:19.741424 1128788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:19.945000 1128788 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:19.945039 1128788 system_pods.go:89] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.945047 1128788 system_pods.go:89] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.945053 1128788 system_pods.go:89] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.945060 1128788 system_pods.go:89] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.945066 1128788 system_pods.go:89] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.945070 1128788 system_pods.go:89] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.945076 1128788 system_pods.go:89] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.945087 1128788 system_pods.go:89] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.945097 1128788 system_pods.go:89] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.945110 1128788 system_pods.go:126] duration metric: took 203.67742ms to wait for k8s-apps to be running ...
	I0318 14:26:19.945122 1128788 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:19.945188 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:19.987286 1128788 system_svc.go:56] duration metric: took 42.149434ms WaitForService to wait for kubelet
	I0318 14:26:19.987328 1128788 kubeadm.go:576] duration metric: took 3.778120092s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:19.987361 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:20.141763 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:20.141803 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:20.141822 1128788 node_conditions.go:105] duration metric: took 154.45408ms to run NodePressure ...
	I0318 14:26:20.141840 1128788 start.go:240] waiting for startup goroutines ...
	I0318 14:26:20.141851 1128788 start.go:245] waiting for cluster config update ...
	I0318 14:26:20.141867 1128788 start.go:254] writing updated cluster config ...
	I0318 14:26:20.142268 1128788 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:20.206832 1128788 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:20.209057 1128788 out.go:177] * Done! kubectl is now configured to use "embed-certs-767719" cluster and "default" namespace by default
	I0318 14:26:18.302228 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:20.799704 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.444912 1128964 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:26:23.444993 1128964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:26:23.445098 1128964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:26:23.445212 1128964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:26:23.445359 1128964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:26:23.445461 1128964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:26:23.446790 1128964 out.go:204]   - Generating certificates and keys ...
	I0318 14:26:23.446904 1128964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:26:23.446986 1128964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:26:23.447102 1128964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:26:23.447194 1128964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:26:23.447309 1128964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:26:23.447376 1128964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:26:23.447453 1128964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:26:23.447529 1128964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:26:23.447607 1128964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:26:23.447693 1128964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:26:23.447741 1128964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:26:23.447856 1128964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:26:23.447937 1128964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:26:23.448019 1128964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:26:23.448121 1128964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:26:23.448194 1128964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:26:23.448311 1128964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:26:23.448422 1128964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:26:23.450038 1128964 out.go:204]   - Booting up control plane ...
	I0318 14:26:23.450174 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:26:23.450282 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:26:23.450371 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:26:23.450509 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:26:23.450633 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:26:23.450671 1128964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:26:23.450818 1128964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:26:23.450887 1128964 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.005932 seconds
	I0318 14:26:23.450974 1128964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:23.451093 1128964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:23.451143 1128964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:23.451340 1128964 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-075922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:23.451414 1128964 kubeadm.go:309] [bootstrap-token] Using token: k51w96.h8xduusjdfbez3gf
	I0318 14:26:23.452848 1128964 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:23.452964 1128964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:23.453073 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:23.453269 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:23.453499 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:23.453664 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:23.453785 1128964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:23.453940 1128964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:23.454005 1128964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:23.454074 1128964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:23.454084 1128964 kubeadm.go:309] 
	I0318 14:26:23.454172 1128964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:23.454186 1128964 kubeadm.go:309] 
	I0318 14:26:23.454288 1128964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:23.454298 1128964 kubeadm.go:309] 
	I0318 14:26:23.454335 1128964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:23.454412 1128964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:23.454475 1128964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:23.454484 1128964 kubeadm.go:309] 
	I0318 14:26:23.454528 1128964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:23.454538 1128964 kubeadm.go:309] 
	I0318 14:26:23.454592 1128964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:23.454599 1128964 kubeadm.go:309] 
	I0318 14:26:23.454681 1128964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:23.454804 1128964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:23.454907 1128964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:23.454919 1128964 kubeadm.go:309] 
	I0318 14:26:23.455027 1128964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:23.455146 1128964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:23.455157 1128964 kubeadm.go:309] 
	I0318 14:26:23.455264 1128964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455401 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:23.455433 1128964 kubeadm.go:309] 	--control-plane 
	I0318 14:26:23.455441 1128964 kubeadm.go:309] 
	I0318 14:26:23.455551 1128964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:23.455560 1128964 kubeadm.go:309] 
	I0318 14:26:23.455666 1128964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455814 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:23.455838 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:26:23.455849 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:23.457678 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:22.801209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:25.305096 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.459285 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:23.475803 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:26:23.515652 1128964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-075922 minikube.k8s.io/updated_at=2024_03_18T14_26_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=default-k8s-diff-port-075922 minikube.k8s.io/primary=true
	I0318 14:26:23.796828 1128964 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:23.796947 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.296970 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.797728 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.297564 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.797144 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:26.297056 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.800960 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:29.802967 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:26.798004 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.297935 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.797550 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.297031 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.797624 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.297549 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.797256 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.297964 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.797927 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:31.297742 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.300787 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:34.800941 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:31.797040 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.297155 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.797371 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.297809 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.797723 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.297045 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.797008 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.297030 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.797767 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.895914 1128964 kubeadm.go:1107] duration metric: took 12.380212538s to wait for elevateKubeSystemPrivileges
	W0318 14:26:35.895975 1128964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:35.895987 1128964 kubeadm.go:393] duration metric: took 5m15.606276512s to StartCluster
	I0318 14:26:35.896013 1128964 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.896123 1128964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:35.898023 1128964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.898324 1128964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:35.900235 1128964 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:35.898415 1128964 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:35.898550 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:35.901588 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:35.901599 1128964 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901617 1128964 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901640 1128964 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901650 1128964 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:35.901665 1128964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-075922"
	I0318 14:26:35.901588 1128964 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901698 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.901723 1128964 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901735 1128964 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:35.901764 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.902055 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902088 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902097 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902126 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902130 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902169 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.919538 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0318 14:26:35.920140 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.920836 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.920864 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.921282 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.921940 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.921983 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.923313 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
	I0318 14:26:35.923321 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0318 14:26:35.923742 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.923792 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.924263 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924280 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924381 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924395 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924710 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924733 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924893 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.925215 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.925235 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.928021 1128964 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.928047 1128964 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:35.928081 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.928422 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.928449 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.941908 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0318 14:26:35.942465 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.943114 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.943146 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.943757 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.943991 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.944493 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0318 14:26:35.944874 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.945387 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.945404 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.945865 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.945988 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.948302 1128964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:35.946821 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.947744 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0318 14:26:35.950087 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:35.950110 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:35.950135 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.950181 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.950664 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.951258 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.951295 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.951755 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.952146 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.953842 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954331 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.954353 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954360 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.954563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.956253 1128964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:35.954739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:32.294235 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:26:32.295514 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:32.295750 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:35.956487 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.957743 1128964 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:35.957764 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:35.957783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.957864 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.960451 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.960896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.960929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.961107 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.961281 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.961435 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.961565 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.968795 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0318 14:26:35.969191 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.969631 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.969646 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.969955 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.970117 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.971799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.972169 1128964 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:35.972188 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:35.972206 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.974906 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975268 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.975301 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975551 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.975767 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.975958 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.976137 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:36.122420 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:36.139655 1128964 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160857 1128964 node_ready.go:49] node "default-k8s-diff-port-075922" has status "Ready":"True"
	I0318 14:26:36.160883 1128964 node_ready.go:38] duration metric: took 21.193343ms for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160893 1128964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:36.176832 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:36.240357 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:36.240385 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:36.261620 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:36.279644 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:36.294510 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:36.294546 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:36.374231 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:36.376166 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:36.419045 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:38.032072 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.752379015s)
	I0318 14:26:38.032148 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032161 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032374 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.770714521s)
	I0318 14:26:38.032416 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032427 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032623 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032652 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032660 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032683 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032796 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032814 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032817 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032835 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032848 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.033046 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033107 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033173 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.033149 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033259 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033284 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.112866 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.112896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.113337 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.113362 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.113384 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176199 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.757085355s)
	I0318 14:26:38.176281 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176302 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176669 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176683 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176697 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176707 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176716 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176955 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176969 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176980 1128964 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-075922"
	I0318 14:26:38.178714 1128964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:26:37.300219 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:39.293136 1128583 pod_ready.go:81] duration metric: took 4m0.000606722s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
	E0318 14:26:39.293173 1128583 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:26:39.293203 1128583 pod_ready.go:38] duration metric: took 4m14.549283732s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:39.293239 1128583 kubeadm.go:591] duration metric: took 4m22.862167815s to restartPrimaryControlPlane
	W0318 14:26:39.293320 1128583 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:26:39.293362 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:26:37.296327 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:37.296642 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:38.180451 1128964 addons.go:505] duration metric: took 2.282033093s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:26:38.194239 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:40.186091 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.186125 1128964 pod_ready.go:81] duration metric: took 4.009253844s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.186139 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193026 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.193059 1128964 pod_ready.go:81] duration metric: took 6.912513ms for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193069 1128964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199244 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.199272 1128964 pod_ready.go:81] duration metric: took 6.195834ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199283 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.204991 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.205019 1128964 pod_ready.go:81] duration metric: took 5.728459ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.205034 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214706 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.214730 1128964 pod_ready.go:81] duration metric: took 9.687528ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214739 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.581970 1128964 pod_ready.go:92] pod "kube-proxy-bzwvf" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.582045 1128964 pod_ready.go:81] duration metric: took 367.297496ms for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.582059 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981562 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.981592 1128964 pod_ready.go:81] duration metric: took 399.525488ms for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981601 1128964 pod_ready.go:38] duration metric: took 4.820697544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:40.981618 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:40.981676 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:40.998626 1128964 api_server.go:72] duration metric: took 5.100242538s to wait for apiserver process to appear ...
	I0318 14:26:40.998672 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:40.998703 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:26:41.010986 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:26:41.012714 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:41.012742 1128964 api_server.go:131] duration metric: took 14.061953ms to wait for apiserver health ...
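	(The lines above are the apiserver health wait: minikube polls /healthz on the cluster's API port — 8444 for this default-k8s-diff-port profile — until it returns 200 "ok", then reads the control plane version. A minimal way to reproduce that probe by hand, assuming the same IP and port from this run and that anonymous access to the health and version endpoints is left at the Kubernetes defaults:

	        curl -sk https://192.168.83.39:8444/healthz    # expect: ok
	        curl -sk https://192.168.83.39:8444/version    # gitVersion should report v1.28.4 for this cluster
	)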
	I0318 14:26:41.012750 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:41.186873 1128964 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:41.186910 1128964 system_pods.go:61] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.186917 1128964 system_pods.go:61] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.186922 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.186935 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.186943 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.186948 1128964 system_pods.go:61] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.186953 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.187013 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.187029 1128964 system_pods.go:61] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.187041 1128964 system_pods.go:74] duration metric: took 174.283401ms to wait for pod list to return data ...
	I0318 14:26:41.187054 1128964 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:41.381195 1128964 default_sa.go:45] found service account: "default"
	I0318 14:26:41.381238 1128964 default_sa.go:55] duration metric: took 194.17219ms for default service account to be created ...
	I0318 14:26:41.381252 1128964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:41.584896 1128964 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:41.584934 1128964 system_pods.go:89] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.584940 1128964 system_pods.go:89] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.584945 1128964 system_pods.go:89] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.584952 1128964 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.584957 1128964 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.584961 1128964 system_pods.go:89] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.584965 1128964 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.584974 1128964 system_pods.go:89] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.584980 1128964 system_pods.go:89] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.584996 1128964 system_pods.go:126] duration metric: took 203.730421ms to wait for k8s-apps to be running ...
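	(In the pod listing above everything is Running except metrics-server-57f55c9bc5-7c444, which is still Pending with its container not ready. A quick way to see why a pod stays Pending — scheduling events, image pull errors — with stock kubectl, assuming the addon keeps the upstream k8s-app=metrics-server label:

	        kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
	        kubectl -n kube-system describe pod -l k8s-app=metrics-server | sed -n '/Events:/,$p'    # print only the Events section
	)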
	I0318 14:26:41.585011 1128964 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:41.585065 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:41.602211 1128964 system_svc.go:56] duration metric: took 17.185915ms WaitForService to wait for kubelet
	I0318 14:26:41.602253 1128964 kubeadm.go:576] duration metric: took 5.703881545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:41.602283 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:41.781292 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:41.781321 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:41.781333 1128964 node_conditions.go:105] duration metric: took 179.044515ms to run NodePressure ...
	I0318 14:26:41.781345 1128964 start.go:240] waiting for startup goroutines ...
	I0318 14:26:41.781352 1128964 start.go:245] waiting for cluster config update ...
	I0318 14:26:41.781363 1128964 start.go:254] writing updated cluster config ...
	I0318 14:26:41.781670 1128964 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:41.845950 1128964 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:41.848522 1128964 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-075922" cluster and "default" namespace by default
	I0318 14:26:47.296738 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:47.296974 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:07.297620 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:07.297848 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:11.668940 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.375539998s)
	I0318 14:27:11.669036 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:11.687767 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:27:11.699135 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:11.710896 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:11.710924 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:11.710971 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:11.721562 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:11.721638 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:11.733335 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:11.744643 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:11.744724 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:11.755801 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.766424 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:11.766515 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.777734 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:11.788887 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:11.788972 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
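	(The grep/rm pairs above are the stale-kubeconfig cleanup after kubeadm reset: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint and removed if the check fails — here the files are simply gone, so every grep exits with status 2 and the rm is a no-op. A condensed sketch of the same pattern, using the endpoint from this run:

	        for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	          sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f \
	            || sudo rm -f /etc/kubernetes/$f    # drop configs that are missing or point elsewhere
	        done
	)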
	I0318 14:27:11.800792 1128583 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:11.858933 1128583 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 14:27:11.859030 1128583 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:27:12.029485 1128583 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:27:12.029703 1128583 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:27:12.029833 1128583 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:27:12.279174 1128583 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:27:12.281285 1128583 out.go:204]   - Generating certificates and keys ...
	I0318 14:27:12.281400 1128583 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:27:12.281507 1128583 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:27:12.281633 1128583 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:27:12.281726 1128583 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:27:12.281844 1128583 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:27:12.281938 1128583 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:27:12.282031 1128583 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:27:12.282121 1128583 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:27:12.282218 1128583 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:27:12.282325 1128583 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:27:12.282392 1128583 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:27:12.282470 1128583 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:27:12.605106 1128583 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:27:12.950706 1128583 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 14:27:13.067948 1128583 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:27:13.340677 1128583 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:27:13.393147 1128583 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:27:13.393891 1128583 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:27:13.396474 1128583 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:27:13.398563 1128583 out.go:204]   - Booting up control plane ...
	I0318 14:27:13.398698 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:27:13.398814 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:27:13.398900 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:27:13.422155 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:27:13.423529 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:27:13.423626 1128583 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:27:13.568295 1128583 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:27:19.571958 1128583 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003509 seconds
	I0318 14:27:19.587644 1128583 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:27:19.607417 1128583 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:27:20.153253 1128583 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:27:20.153526 1128583 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-188109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:27:20.671613 1128583 kubeadm.go:309] [bootstrap-token] Using token: oq5d1l.24j9td8ex727h998
	I0318 14:27:20.673250 1128583 out.go:204]   - Configuring RBAC rules ...
	I0318 14:27:20.673402 1128583 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:27:20.680765 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:27:20.693884 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:27:20.698696 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:27:20.702572 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:27:20.710027 1128583 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:27:20.725068 1128583 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:27:20.981178 1128583 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:27:21.104335 1128583 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:27:21.107428 1128583 kubeadm.go:309] 
	I0318 14:27:21.107550 1128583 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:27:21.107596 1128583 kubeadm.go:309] 
	I0318 14:27:21.107725 1128583 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:27:21.107750 1128583 kubeadm.go:309] 
	I0318 14:27:21.107796 1128583 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:27:21.107894 1128583 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:27:21.107995 1128583 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:27:21.108030 1128583 kubeadm.go:309] 
	I0318 14:27:21.108127 1128583 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:27:21.108145 1128583 kubeadm.go:309] 
	I0318 14:27:21.108228 1128583 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:27:21.108242 1128583 kubeadm.go:309] 
	I0318 14:27:21.108318 1128583 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:27:21.108400 1128583 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:27:21.108487 1128583 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:27:21.108503 1128583 kubeadm.go:309] 
	I0318 14:27:21.108628 1128583 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:27:21.108730 1128583 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:27:21.108741 1128583 kubeadm.go:309] 
	I0318 14:27:21.108839 1128583 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.108968 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:27:21.109031 1128583 kubeadm.go:309] 	--control-plane 
	I0318 14:27:21.109054 1128583 kubeadm.go:309] 
	I0318 14:27:21.109176 1128583 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:27:21.109195 1128583 kubeadm.go:309] 
	I0318 14:27:21.109298 1128583 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.109455 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:27:21.114992 1128583 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:21.115128 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:27:21.115151 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:27:21.116940 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:27:21.118320 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:27:21.167945 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:27:21.256429 1128583 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-188109 minikube.k8s.io/updated_at=2024_03_18T14_27_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=no-preload-188109 minikube.k8s.io/primary=true
	I0318 14:27:21.315419 1128583 ops.go:34] apiserver oom_adj: -16
	I0318 14:27:21.530472 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.030814 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.531214 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.030869 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.530677 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.031137 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.531400 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.031455 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.530648 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.031501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.531399 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.031109 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.531261 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.030757 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.531295 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.030505 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.531501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.030996 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.530490 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.030520 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.531340 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.031217 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.531425 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.031231 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.531300 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.678904 1128583 kubeadm.go:1107] duration metric: took 12.422463336s to wait for elevateKubeSystemPrivileges
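	(The run of `kubectl get sa default` lines above, from 14:27:21 through 14:27:33, is a readiness poll: the call keeps failing until kube-controller-manager has created the "default" ServiceAccount, and the total wait is what the log reports as the elevateKubeSystemPrivileges duration. A minimal equivalent of that poll — the 0.5s interval is illustrative:

	        until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
	              --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	          sleep 0.5    # retry until the default ServiceAccount exists
	        done
	)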
	W0318 14:27:33.678959 1128583 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:27:33.678972 1128583 kubeadm.go:393] duration metric: took 5m17.305262011s to StartCluster
	I0318 14:27:33.678999 1128583 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.679119 1128583 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:27:33.681595 1128583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.681893 1128583 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:27:33.683724 1128583 out.go:177] * Verifying Kubernetes components...
	I0318 14:27:33.682059 1128583 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:27:33.682122 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:27:33.685123 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:27:33.685131 1128583 addons.go:69] Setting default-storageclass=true in profile "no-preload-188109"
	I0318 14:27:33.685135 1128583 addons.go:69] Setting storage-provisioner=true in profile "no-preload-188109"
	I0318 14:27:33.685139 1128583 addons.go:69] Setting metrics-server=true in profile "no-preload-188109"
	I0318 14:27:33.685165 1128583 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-188109"
	I0318 14:27:33.685173 1128583 addons.go:234] Setting addon metrics-server=true in "no-preload-188109"
	I0318 14:27:33.685175 1128583 addons.go:234] Setting addon storage-provisioner=true in "no-preload-188109"
	W0318 14:27:33.685182 1128583 addons.go:243] addon metrics-server should already be in state true
	W0318 14:27:33.685185 1128583 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:27:33.685231 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685238 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685573 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685575 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685613 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685617 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685629 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685637 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.703022 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0318 14:27:33.703262 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0318 14:27:33.703844 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704181 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704628 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704649 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.704715 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704736 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.705213 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705374 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705809 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705863 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.705911 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705987 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.706076 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0318 14:27:33.706558 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.707198 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.707222 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.707699 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.708354 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.712289 1128583 addons.go:234] Setting addon default-storageclass=true in "no-preload-188109"
	W0318 14:27:33.712323 1128583 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:27:33.712364 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.712795 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.712833 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.724381 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0318 14:27:33.724980 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.725587 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.725614 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.726054 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.726363 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.727777 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0318 14:27:33.728182 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.728497 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.730538 1128583 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:27:33.729152 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.730851 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0318 14:27:33.732037 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:27:33.732055 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:27:33.732076 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.732113 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.732489 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.732593 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.732881 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.732979 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.732991 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.733604 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.734297 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.734329 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.735399 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.737266 1128583 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:27:33.735988 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.736830 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.739081 1128583 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:33.739098 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:27:33.737327 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.739122 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.739142 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.740009 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.740263 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.740482 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.742702 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743181 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.743211 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743473 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.743706 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.743902 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.744097 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.752903 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0318 14:27:33.756275 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.756901 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.756932 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.757363 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.757603 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.759471 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.759732 1128583 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:33.759751 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:27:33.759772 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.762687 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763139 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.763162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763414 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.763599 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.763765 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.763919 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.942490 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:27:33.975796 1128583 node_ready.go:35] waiting up to 6m0s for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008100 1128583 node_ready.go:49] node "no-preload-188109" has status "Ready":"True"
	I0318 14:27:34.008135 1128583 node_ready.go:38] duration metric: took 32.281068ms for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008149 1128583 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:34.039370 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:34.067765 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:27:34.067798 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:27:34.088294 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:34.091931 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:34.121689 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:27:34.121722 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:27:34.183609 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:34.183638 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:27:34.264906 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:35.590900 1128583 pod_ready.go:92] pod "coredns-76f75df574-jk9v5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.590928 1128583 pod_ready.go:81] duration metric: took 1.551526097s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.590938 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605647 1128583 pod_ready.go:92] pod "coredns-76f75df574-xczpc" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.605675 1128583 pod_ready.go:81] duration metric: took 14.730232ms for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605685 1128583 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.613213 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.521243904s)
	I0318 14:27:35.613276 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613289 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613282 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.524948587s)
	I0318 14:27:35.613324 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.613811 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613813 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.613824 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613831 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614119 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614166 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614183 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.614191 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614192 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.614234 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614273 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614502 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614517 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.636576 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.636610 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.636920 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.636946 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.656945 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.656972 1128583 pod_ready.go:81] duration metric: took 51.280554ms for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.656983 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683260 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.683291 1128583 pod_ready.go:81] duration metric: took 26.301625ms for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683301 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.691855 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42688194s)
	I0318 14:27:35.691918 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.691934 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692300 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692325 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692336 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.692344 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692661 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.692701 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692709 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692721 1128583 addons.go:470] Verifying addon metrics-server=true in "no-preload-188109"
	I0318 14:27:35.694758 1128583 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:27:35.696004 1128583 addons.go:505] duration metric: took 2.013954954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:27:35.709010 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.709035 1128583 pod_ready.go:81] duration metric: took 25.726967ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.709045 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982032 1128583 pod_ready.go:92] pod "kube-proxy-qpxx5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.982080 1128583 pod_ready.go:81] duration metric: took 273.026354ms for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982094 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380184 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:36.380228 1128583 pod_ready.go:81] duration metric: took 398.123566ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380241 1128583 pod_ready.go:38] duration metric: took 2.372078145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:36.380264 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:27:36.380334 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:27:36.401316 1128583 api_server.go:72] duration metric: took 2.719374991s to wait for apiserver process to appear ...
	I0318 14:27:36.401358 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:27:36.401389 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:27:36.407212 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:27:36.408930 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:27:36.408966 1128583 api_server.go:131] duration metric: took 7.597771ms to wait for apiserver health ...
	I0318 14:27:36.408989 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:27:36.583053 1128583 system_pods.go:59] 9 kube-system pods found
	I0318 14:27:36.583099 1128583 system_pods.go:61] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.583107 1128583 system_pods.go:61] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.583112 1128583 system_pods.go:61] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.583116 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.583120 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.583123 1128583 system_pods.go:61] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.583127 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.583134 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.583138 1128583 system_pods.go:61] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.583147 1128583 system_pods.go:74] duration metric: took 174.139423ms to wait for pod list to return data ...
	I0318 14:27:36.583156 1128583 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:27:36.779733 1128583 default_sa.go:45] found service account: "default"
	I0318 14:27:36.779771 1128583 default_sa.go:55] duration metric: took 196.607194ms for default service account to be created ...
	I0318 14:27:36.779783 1128583 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:27:36.982750 1128583 system_pods.go:86] 9 kube-system pods found
	I0318 14:27:36.982783 1128583 system_pods.go:89] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.982789 1128583 system_pods.go:89] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.982793 1128583 system_pods.go:89] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.982798 1128583 system_pods.go:89] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.982804 1128583 system_pods.go:89] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.982808 1128583 system_pods.go:89] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.982812 1128583 system_pods.go:89] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.982819 1128583 system_pods.go:89] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.982823 1128583 system_pods.go:89] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.982832 1128583 system_pods.go:126] duration metric: took 203.042771ms to wait for k8s-apps to be running ...
	I0318 14:27:36.982839 1128583 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:27:36.982902 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:37.000948 1128583 system_svc.go:56] duration metric: took 18.09435ms WaitForService to wait for kubelet
	I0318 14:27:37.000980 1128583 kubeadm.go:576] duration metric: took 3.319049387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:27:37.001005 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:27:37.180608 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:27:37.180639 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:27:37.180652 1128583 node_conditions.go:105] duration metric: took 179.641912ms to run NodePressure ...
	I0318 14:27:37.180665 1128583 start.go:240] waiting for startup goroutines ...
	I0318 14:27:37.180672 1128583 start.go:245] waiting for cluster config update ...
	I0318 14:27:37.180686 1128583 start.go:254] writing updated cluster config ...
	I0318 14:27:37.181004 1128583 ssh_runner.go:195] Run: rm -f paused
	I0318 14:27:37.236286 1128583 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 14:27:37.238455 1128583 out.go:177] * Done! kubectl is now configured to use "no-preload-188109" cluster and "default" namespace by default
	I0318 14:27:47.299396 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:47.299722 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:47.299759 1129259 kubeadm.go:309] 
	I0318 14:27:47.299848 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:27:47.300040 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:27:47.300062 1129259 kubeadm.go:309] 
	I0318 14:27:47.300106 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:27:47.300187 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:27:47.300340 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:27:47.300356 1129259 kubeadm.go:309] 
	I0318 14:27:47.300534 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:27:47.300590 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:27:47.300636 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:27:47.300646 1129259 kubeadm.go:309] 
	I0318 14:27:47.300803 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:27:47.300929 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:27:47.300942 1129259 kubeadm.go:309] 
	I0318 14:27:47.301093 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:27:47.301232 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:27:47.301346 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:27:47.301475 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:27:47.301496 1129259 kubeadm.go:309] 
	I0318 14:27:47.303477 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:47.303616 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:27:47.303718 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 14:27:47.303903 1129259 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 14:27:47.303969 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:27:47.790664 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:47.807959 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:47.820332 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:47.820357 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:47.820422 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:47.832124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:47.832219 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:47.845017 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:47.856877 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:47.856954 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:47.868530 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.879309 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:47.879394 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.891766 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:47.903303 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:47.903392 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:27:47.914820 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:48.170124 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:29:44.224147 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:29:44.224414 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 14:29:44.225789 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:29:44.225885 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:29:44.226010 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:29:44.226135 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:29:44.226292 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:29:44.226384 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:29:44.228246 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:29:44.228346 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:29:44.228440 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:29:44.228567 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:29:44.228684 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:29:44.228803 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:29:44.228874 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:29:44.229018 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:29:44.229096 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:29:44.229166 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:29:44.229231 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:29:44.229269 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:29:44.229316 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:29:44.229365 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:29:44.229415 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:29:44.229468 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:29:44.229540 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:29:44.229663 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:29:44.229755 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:29:44.229804 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:29:44.229893 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:29:44.231359 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:29:44.231484 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:29:44.231592 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:29:44.231674 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:29:44.231777 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:29:44.231993 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:29:44.232046 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:29:44.232103 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232333 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232411 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232621 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232691 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232896 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232955 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233113 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233178 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233370 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233382 1129259 kubeadm.go:309] 
	I0318 14:29:44.233430 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:29:44.233480 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:29:44.233492 1129259 kubeadm.go:309] 
	I0318 14:29:44.233523 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:29:44.233554 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:29:44.233642 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:29:44.233655 1129259 kubeadm.go:309] 
	I0318 14:29:44.233797 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:29:44.233830 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:29:44.233860 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:29:44.233867 1129259 kubeadm.go:309] 
	I0318 14:29:44.233994 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:29:44.234116 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:29:44.234124 1129259 kubeadm.go:309] 
	I0318 14:29:44.234246 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:29:44.234389 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:29:44.234516 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:29:44.234606 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:29:44.234676 1129259 kubeadm.go:309] 
	I0318 14:29:44.234699 1129259 kubeadm.go:393] duration metric: took 7m59.028536241s to StartCluster
	I0318 14:29:44.234794 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:29:44.234989 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:29:44.301714 1129259 cri.go:89] found id: ""
	I0318 14:29:44.301764 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.301792 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:29:44.301801 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:29:44.301865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:29:44.345158 1129259 cri.go:89] found id: ""
	I0318 14:29:44.345197 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.345209 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:29:44.345217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:29:44.345281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:29:44.381184 1129259 cri.go:89] found id: ""
	I0318 14:29:44.381217 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.381227 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:29:44.381232 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:29:44.381296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:29:44.419906 1129259 cri.go:89] found id: ""
	I0318 14:29:44.419972 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.419987 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:29:44.419996 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:29:44.420085 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:29:44.459683 1129259 cri.go:89] found id: ""
	I0318 14:29:44.459732 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.459747 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:29:44.459755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:29:44.459848 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:29:44.502434 1129259 cri.go:89] found id: ""
	I0318 14:29:44.502477 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.502490 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:29:44.502499 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:29:44.502563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:29:44.543384 1129259 cri.go:89] found id: ""
	I0318 14:29:44.543417 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.543429 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:29:44.543438 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:29:44.543509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:29:44.584405 1129259 cri.go:89] found id: ""
	I0318 14:29:44.584450 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.584463 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:29:44.584478 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:29:44.584496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:29:44.638997 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:29:44.639036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:29:44.656641 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:29:44.656679 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:29:44.757942 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:29:44.757976 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:29:44.757994 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:29:44.878791 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:29:44.878838 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 14:29:44.926371 1129259 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 14:29:44.926432 1129259 out.go:239] * 
	W0318 14:29:44.926513 1129259 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.926548 1129259 out.go:239] * 
	W0318 14:29:44.927402 1129259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:29:44.931815 1129259 out.go:177] 
	W0318 14:29:44.933471 1129259 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.933562 1129259 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 14:29:44.933609 1129259 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 14:29:44.935544 1129259 out.go:177] 
	
	
	==> CRI-O <==
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.828523289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772186828487686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30f48a8a-fe40-41ec-9646-7dc45895245d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.831875276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9fa1768-6ce1-42ef-8b8d-a14656a06680 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.831935011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9fa1768-6ce1-42ef-8b8d-a14656a06680 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.831967289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d9fa1768-6ce1-42ef-8b8d-a14656a06680 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.870618800Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=510b4b53-17dd-42f5-a79d-dccb29a3e518 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.870702014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=510b4b53-17dd-42f5-a79d-dccb29a3e518 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.871847186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9c638ee-29fd-4cd5-965b-0e6551f43acd name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.872481629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772186872454955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9c638ee-29fd-4cd5-965b-0e6551f43acd name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.873054181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23e80ba0-5f3f-4f70-9a6c-01db7cf2d97e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.873110852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23e80ba0-5f3f-4f70-9a6c-01db7cf2d97e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.873155091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=23e80ba0-5f3f-4f70-9a6c-01db7cf2d97e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.909048385Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=553bd35c-3e8e-4f56-a531-1953cd23a72a name=/runtime.v1.RuntimeService/Version
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.909123592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=553bd35c-3e8e-4f56-a531-1953cd23a72a name=/runtime.v1.RuntimeService/Version
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.910369317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ceb6141e-4a28-4891-8ca9-02808223e787 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.910711753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772186910691769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ceb6141e-4a28-4891-8ca9-02808223e787 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.911342904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9ece77b-133d-4264-bf14-e0def313e862 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.911397298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9ece77b-133d-4264-bf14-e0def313e862 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.911428038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f9ece77b-133d-4264-bf14-e0def313e862 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.953225110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f105a4bb-29ee-4d08-90ca-18184ed7b4b8 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.953386402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f105a4bb-29ee-4d08-90ca-18184ed7b4b8 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.954838764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06a508ec-d2d6-4dfe-b3e2-90e4a839e00e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.955323145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772186955237744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06a508ec-d2d6-4dfe-b3e2-90e4a839e00e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.956001474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7289bfd-e538-40ee-b92c-5123462b60b3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.956051054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7289bfd-e538-40ee-b92c-5123462b60b3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:29:46 old-k8s-version-782728 crio[653]: time="2024-03-18 14:29:46.956083456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e7289bfd-e538-40ee-b92c-5123462b60b3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar18 14:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052875] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041790] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.841305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.922199] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.676692] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.118921] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.062985] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068114] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.236009] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.138019] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.296452] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.964415] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.070819] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.228114] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[  +9.153953] kauditd_printk_skb: 46 callbacks suppressed
	[Mar18 14:25] systemd-fstab-generator[4965]: Ignoring "noauto" option for root device
	[Mar18 14:27] systemd-fstab-generator[5242]: Ignoring "noauto" option for root device
	[  +0.076165] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:29:47 up 8 min,  0 users,  load average: 0.09, 0.17, 0.11
	Linux old-k8s-version-782728 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000b4d9e0)
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]: goroutine 159 [select]:
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009adef0, 0x4f0ac20, 0xc00057bb80, 0x1, 0xc0001000c0)
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024e7e0, 0xc0001000c0)
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00083a9c0, 0xc000b5fb20)
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 18 14:29:44 old-k8s-version-782728 kubelet[5424]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 18 14:29:44 old-k8s-version-782728 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 18 14:29:44 old-k8s-version-782728 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 18 14:29:44 old-k8s-version-782728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 18 14:29:44 old-k8s-version-782728 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 18 14:29:44 old-k8s-version-782728 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 18 14:29:45 old-k8s-version-782728 kubelet[5483]: I0318 14:29:45.008124    5483 server.go:416] Version: v1.20.0
	Mar 18 14:29:45 old-k8s-version-782728 kubelet[5483]: I0318 14:29:45.008867    5483 server.go:837] Client rotation is on, will bootstrap in background
	Mar 18 14:29:45 old-k8s-version-782728 kubelet[5483]: I0318 14:29:45.012788    5483 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 18 14:29:45 old-k8s-version-782728 kubelet[5483]: W0318 14:29:45.014144    5483 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 18 14:29:45 old-k8s-version-782728 kubelet[5483]: I0318 14:29:45.014158    5483 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-782728 -n old-k8s-version-782728
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 2 (274.210438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-782728" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (747.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0318 14:26:25.564704 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767719 -n embed-certs-767719
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:35:20.846350951 +0000 UTC m=+6638.639810157
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767719 -n embed-certs-767719
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-767719 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-767719 logs -n 25: (2.216582999s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-059272 sudo find                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo find                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-059272 sudo crio                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo crio                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-059272                                       | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| delete  | -p flannel-059272                                      | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-784874 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | disable-driver-mounts-784874                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:14 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-188109             | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767719            | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-075922  | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC | 18 Mar 24 14:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC |                     |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-782728        | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-188109                  | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC | 18 Mar 24 14:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767719                 | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-075922       | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-782728             | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:17:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:17:21.149860 1129259 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:17:21.150009 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150020 1129259 out.go:304] Setting ErrFile to fd 2...
	I0318 14:17:21.150027 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150261 1129259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:17:21.150831 1129259 out.go:298] Setting JSON to false
	I0318 14:17:21.151818 1129259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21588,"bootTime":1710749853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:17:21.151904 1129259 start.go:139] virtualization: kvm guest
	I0318 14:17:21.154086 1129259 out.go:177] * [old-k8s-version-782728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:17:21.155595 1129259 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:17:21.157136 1129259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:17:21.155603 1129259 notify.go:220] Checking for updates...
	I0318 14:17:21.160112 1129259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:17:21.161672 1129259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:17:21.163212 1129259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:17:21.164653 1129259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:17:21.166692 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:17:21.167108 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.167176 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.182529 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0318 14:17:21.183003 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.183578 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.183602 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.183959 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.184192 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.186217 1129259 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 14:17:21.187902 1129259 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:17:21.188243 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.188288 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.204193 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0318 14:17:21.204646 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.205226 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.205262 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.205658 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.205879 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.243555 1129259 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 14:17:21.244857 1129259 start.go:297] selected driver: kvm2
	I0318 14:17:21.244882 1129259 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.245008 1129259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:17:21.245726 1129259 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.245812 1129259 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:17:21.261810 1129259 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:17:21.262852 1129259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:17:21.262962 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:17:21.262975 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:17:21.263064 1129259 start.go:340] cluster config:
	{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.263366 1129259 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.265819 1129259 out.go:177] * Starting "old-k8s-version-782728" primary control-plane node in "old-k8s-version-782728" cluster
	I0318 14:17:24.228169 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:21.267156 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:17:21.267198 1129259 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 14:17:21.267214 1129259 cache.go:56] Caching tarball of preloaded images
	I0318 14:17:21.267311 1129259 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:17:21.267327 1129259 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 14:17:21.267448 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:17:21.267695 1129259 start.go:360] acquireMachinesLock for old-k8s-version-782728: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:17:27.300185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:33.380164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:36.452102 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:42.536087 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:45.604211 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:51.684168 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:54.756227 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:00.836108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:03.908246 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:09.988223 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:13.060123 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:19.140179 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:22.212209 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:28.292206 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:31.364121 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:37.444195 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:40.516108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:46.596160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:49.668120 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:55.748134 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:58.820202 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:04.900183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:07.972128 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:14.052140 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:17.124242 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:23.204175 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:26.276172 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:32.356183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:35.428256 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:41.508181 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:44.580142 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:50.660193 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:53.732160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:59.812151 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:02.884164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:08.964174 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:12.036185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:18.116178 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:21.188147 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:27.268137 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:30.340177 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:33.345074 1128788 start.go:364] duration metric: took 4m12.599457373s to acquireMachinesLock for "embed-certs-767719"
	I0318 14:20:33.345136 1128788 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:33.345145 1128788 fix.go:54] fixHost starting: 
	I0318 14:20:33.345584 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:33.345638 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:33.362007 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0318 14:20:33.362504 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:33.363014 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:20:33.363037 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:33.363432 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:33.363634 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:33.363787 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:20:33.365593 1128788 fix.go:112] recreateIfNeeded on embed-certs-767719: state=Stopped err=<nil>
	I0318 14:20:33.365619 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	W0318 14:20:33.365792 1128788 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:33.367525 1128788 out.go:177] * Restarting existing kvm2 VM for "embed-certs-767719" ...
	I0318 14:20:33.368930 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Start
	I0318 14:20:33.369145 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring networks are active...
	I0318 14:20:33.370041 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network default is active
	I0318 14:20:33.370474 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network mk-embed-certs-767719 is active
	I0318 14:20:33.370832 1128788 main.go:141] libmachine: (embed-certs-767719) Getting domain xml...
	I0318 14:20:33.371609 1128788 main.go:141] libmachine: (embed-certs-767719) Creating domain...
	I0318 14:20:34.596425 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting to get IP...
	I0318 14:20:34.597292 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.597677 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.597753 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.597666 1130210 retry.go:31] will retry after 244.312377ms: waiting for machine to come up
	I0318 14:20:34.843360 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.844039 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.844082 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.843988 1130210 retry.go:31] will retry after 388.782007ms: waiting for machine to come up
	I0318 14:20:35.234931 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.235304 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.235334 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.235252 1130210 retry.go:31] will retry after 449.871291ms: waiting for machine to come up
	I0318 14:20:33.342334 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:33.342408 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.342790 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:20:33.342823 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.343061 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:20:33.344920 1128583 machine.go:97] duration metric: took 4m37.408911801s to provisionDockerMachine
	I0318 14:20:33.344982 1128583 fix.go:56] duration metric: took 4m37.431584024s for fixHost
	I0318 14:20:33.344992 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 4m37.431613044s
	W0318 14:20:33.345017 1128583 start.go:713] error starting host: provision: host is not running
	W0318 14:20:33.345209 1128583 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 14:20:33.345223 1128583 start.go:728] Will try again in 5 seconds ...
	I0318 14:20:35.687048 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.687565 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.687604 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.687508 1130210 retry.go:31] will retry after 470.225551ms: waiting for machine to come up
	I0318 14:20:36.159138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.159642 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.159668 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.159590 1130210 retry.go:31] will retry after 638.634635ms: waiting for machine to come up
	I0318 14:20:36.799431 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.799820 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.799857 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.799764 1130210 retry.go:31] will retry after 758.659569ms: waiting for machine to come up
	I0318 14:20:37.559752 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:37.560189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:37.560224 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:37.560116 1130210 retry.go:31] will retry after 1.163344023s: waiting for machine to come up
	I0318 14:20:38.724981 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:38.725498 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:38.725561 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:38.725341 1130210 retry.go:31] will retry after 1.155934539s: waiting for machine to come up
	I0318 14:20:39.882622 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:39.883025 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:39.883074 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:39.882966 1130210 retry.go:31] will retry after 1.832023161s: waiting for machine to come up
	I0318 14:20:38.347296 1128583 start.go:360] acquireMachinesLock for no-preload-188109: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:20:41.717138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:41.717723 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:41.717757 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:41.717642 1130210 retry.go:31] will retry after 1.526824443s: waiting for machine to come up
	I0318 14:20:43.246389 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:43.246960 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:43.246997 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:43.246901 1130210 retry.go:31] will retry after 2.608273558s: waiting for machine to come up
	I0318 14:20:45.858375 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:45.858919 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:45.858943 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:45.858871 1130210 retry.go:31] will retry after 2.272908905s: waiting for machine to come up
	I0318 14:20:48.134345 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:48.134774 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:48.134826 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:48.134739 1130210 retry.go:31] will retry after 3.671073699s: waiting for machine to come up
	I0318 14:20:53.273198 1128964 start.go:364] duration metric: took 4m11.791347901s to acquireMachinesLock for "default-k8s-diff-port-075922"
	I0318 14:20:53.273284 1128964 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:53.273295 1128964 fix.go:54] fixHost starting: 
	I0318 14:20:53.273834 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:53.273879 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:53.291440 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0318 14:20:53.291988 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:53.292571 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:20:53.292605 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:53.292931 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:53.293125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:20:53.293278 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:20:53.294856 1128964 fix.go:112] recreateIfNeeded on default-k8s-diff-port-075922: state=Stopped err=<nil>
	I0318 14:20:53.294889 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	W0318 14:20:53.295063 1128964 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:53.297784 1128964 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-075922" ...
	I0318 14:20:51.809859 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.810477 1128788 main.go:141] libmachine: (embed-certs-767719) Found IP for machine: 192.168.72.45
	I0318 14:20:51.810503 1128788 main.go:141] libmachine: (embed-certs-767719) Reserving static IP address...
	I0318 14:20:51.810518 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has current primary IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.811061 1128788 main.go:141] libmachine: (embed-certs-767719) Reserved static IP address: 192.168.72.45
	I0318 14:20:51.811104 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.811112 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting for SSH to be available...
	I0318 14:20:51.811137 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | skip adding static IP to network mk-embed-certs-767719 - found existing host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"}
	I0318 14:20:51.811163 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Getting to WaitForSSH function...
	I0318 14:20:51.813739 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814076 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.814121 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH client type: external
	I0318 14:20:51.814225 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa (-rw-------)
	I0318 14:20:51.814282 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:20:51.814327 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | About to run SSH command:
	I0318 14:20:51.814346 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | exit 0
	I0318 14:20:51.944192 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | SSH cmd err, output: <nil>: 
	I0318 14:20:51.944624 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetConfigRaw
	I0318 14:20:51.945477 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:51.948244 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.948667 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.948711 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.949069 1128788 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/config.json ...
	I0318 14:20:51.949305 1128788 machine.go:94] provisionDockerMachine start ...
	I0318 14:20:51.949327 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:51.949596 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:51.952267 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952653 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.952703 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952836 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:51.953047 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953200 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953376 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:51.953525 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:51.953772 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:51.953785 1128788 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:20:52.068806 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:20:52.068847 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069162 1128788 buildroot.go:166] provisioning hostname "embed-certs-767719"
	I0318 14:20:52.069198 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069500 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.072258 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072750 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.072785 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072939 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.073146 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073312 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073492 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.073730 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.073916 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.073934 1128788 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-767719 && echo "embed-certs-767719" | sudo tee /etc/hostname
	I0318 14:20:52.204197 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-767719
	
	I0318 14:20:52.204258 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.207520 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.207927 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.207959 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.208178 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.208478 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208740 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208961 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.209164 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.209352 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.209370 1128788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-767719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-767719/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-767719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:20:52.337185 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:52.337220 1128788 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:20:52.337243 1128788 buildroot.go:174] setting up certificates
	I0318 14:20:52.337253 1128788 provision.go:84] configureAuth start
	I0318 14:20:52.337264 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.337561 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:52.340693 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341061 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.341098 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341280 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.343239 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343570 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.343595 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343709 1128788 provision.go:143] copyHostCerts
	I0318 14:20:52.343782 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:20:52.343794 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:20:52.343888 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:20:52.344001 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:20:52.344010 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:20:52.344038 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:20:52.344095 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:20:52.344103 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:20:52.344126 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:20:52.344220 1128788 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.embed-certs-767719 san=[127.0.0.1 192.168.72.45 embed-certs-767719 localhost minikube]
	I0318 14:20:52.550241 1128788 provision.go:177] copyRemoteCerts
	I0318 14:20:52.550380 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:20:52.550433 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.553182 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553591 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.553626 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553824 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.554056 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.554241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.554392 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:52.645341 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:20:52.672476 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:20:52.698609 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:20:52.724434 1128788 provision.go:87] duration metric: took 387.165868ms to configureAuth
	I0318 14:20:52.724471 1128788 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:20:52.724727 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:20:52.724827 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.727323 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727700 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.727764 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727882 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.728098 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728443 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.728626 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.728859 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.728878 1128788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:20:53.012918 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:20:53.012959 1128788 machine.go:97] duration metric: took 1.063639009s to provisionDockerMachine
	I0318 14:20:53.012976 1128788 start.go:293] postStartSetup for "embed-certs-767719" (driver="kvm2")
	I0318 14:20:53.012990 1128788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:20:53.013039 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.013471 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:20:53.013505 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.016524 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.016929 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.016961 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.017153 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.017372 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.017582 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.017846 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.107977 1128788 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:20:53.113146 1128788 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:20:53.113184 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:20:53.113302 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:20:53.113423 1128788 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:20:53.113558 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:20:53.125166 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:53.152094 1128788 start.go:296] duration metric: took 139.099686ms for postStartSetup
	I0318 14:20:53.152147 1128788 fix.go:56] duration metric: took 19.807001958s for fixHost
	I0318 14:20:53.152194 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.155058 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155371 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.155401 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155643 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.155908 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156138 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156307 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.156536 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:53.156770 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:53.156786 1128788 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:20:53.272998 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771653.240528844
	
	I0318 14:20:53.273029 1128788 fix.go:216] guest clock: 1710771653.240528844
	I0318 14:20:53.273046 1128788 fix.go:229] Guest: 2024-03-18 14:20:53.240528844 +0000 UTC Remote: 2024-03-18 14:20:53.15215228 +0000 UTC m=+272.563569050 (delta=88.376564ms)
	I0318 14:20:53.273075 1128788 fix.go:200] guest clock delta is within tolerance: 88.376564ms
	I0318 14:20:53.273083 1128788 start.go:83] releasing machines lock for "embed-certs-767719", held for 19.927965733s
	I0318 14:20:53.273118 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.273431 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:53.276309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276740 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.276768 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276958 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277493 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277716 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277806 1128788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:20:53.277851 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.277976 1128788 ssh_runner.go:195] Run: cat /version.json
	I0318 14:20:53.278002 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.280799 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.280853 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281234 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281263 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281289 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281518 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281616 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281767 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281850 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281945 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282028 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282090 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.282179 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.386584 1128788 ssh_runner.go:195] Run: systemctl --version
	I0318 14:20:53.393371 1128788 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:20:53.547565 1128788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:20:53.554182 1128788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:20:53.554266 1128788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:20:53.573031 1128788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:20:53.573071 1128788 start.go:494] detecting cgroup driver to use...
	I0318 14:20:53.573197 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:20:53.591649 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:20:53.607279 1128788 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:20:53.607359 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:20:53.624327 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:20:53.640398 1128788 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:20:53.759979 1128788 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:20:53.931294 1128788 docker.go:233] disabling docker service ...
	I0318 14:20:53.931381 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:20:53.954433 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:20:53.969396 1128788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:20:54.107898 1128788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:20:54.241874 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:20:54.257748 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:20:54.278981 1128788 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:20:54.279057 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.293329 1128788 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:20:54.293390 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.304838 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.316646 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.328623 1128788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:20:54.340540 1128788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:20:54.352368 1128788 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:20:54.352433 1128788 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:20:54.368965 1128788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:20:54.389268 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:54.511182 1128788 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:20:54.657685 1128788 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:20:54.657798 1128788 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:20:54.663591 1128788 start.go:562] Will wait 60s for crictl version
	I0318 14:20:54.663670 1128788 ssh_runner.go:195] Run: which crictl
	I0318 14:20:54.667903 1128788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:20:54.707961 1128788 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:20:54.708065 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.738240 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.773562 1128788 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:20:54.775286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:54.778784 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779228 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:54.779265 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779498 1128788 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 14:20:54.784575 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:54.799207 1128788 kubeadm.go:877] updating cluster {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:20:54.799380 1128788 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:20:54.799440 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:54.839309 1128788 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:20:54.839387 1128788 ssh_runner.go:195] Run: which lz4
	I0318 14:20:54.844323 1128788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:20:54.850487 1128788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:20:54.850524 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 14:20:53.299380 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Start
	I0318 14:20:53.299595 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring networks are active...
	I0318 14:20:53.300497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network default is active
	I0318 14:20:53.300887 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network mk-default-k8s-diff-port-075922 is active
	I0318 14:20:53.301316 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Getting domain xml...
	I0318 14:20:53.302079 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Creating domain...
	I0318 14:20:54.607619 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting to get IP...
	I0318 14:20:54.608510 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609075 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.609050 1130331 retry.go:31] will retry after 282.377323ms: waiting for machine to come up
	I0318 14:20:54.892766 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893323 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.893259 1130331 retry.go:31] will retry after 264.840581ms: waiting for machine to come up
	I0318 14:20:55.160018 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160536 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160578 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.160460 1130331 retry.go:31] will retry after 402.458985ms: waiting for machine to come up
	I0318 14:20:55.564282 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564773 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.564727 1130331 retry.go:31] will retry after 382.70672ms: waiting for machine to come up
	I0318 14:20:55.949676 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950183 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950218 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.950122 1130331 retry.go:31] will retry after 676.466466ms: waiting for machine to come up
	I0318 14:20:56.798325 1128788 crio.go:444] duration metric: took 1.954051074s to copy over tarball
	I0318 14:20:56.798418 1128788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:20:59.431722 1128788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.633260911s)
	I0318 14:20:59.431777 1128788 crio.go:451] duration metric: took 2.633417573s to extract the tarball
	I0318 14:20:59.431788 1128788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:20:59.476265 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:59.534130 1128788 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:20:59.534161 1128788 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:20:59.534173 1128788 kubeadm.go:928] updating node { 192.168.72.45 8443 v1.28.4 crio true true} ...
	I0318 14:20:59.534357 1128788 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-767719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:20:59.534499 1128788 ssh_runner.go:195] Run: crio config
	I0318 14:20:59.594778 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:20:59.594814 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:20:59.594831 1128788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:20:59.594894 1128788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.45 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-767719 NodeName:embed-certs-767719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:20:59.595092 1128788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-767719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:20:59.595203 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:20:59.610298 1128788 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:20:59.610388 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:20:59.624050 1128788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 14:20:59.644283 1128788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:20:59.663987 1128788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0318 14:20:59.685379 1128788 ssh_runner.go:195] Run: grep 192.168.72.45	control-plane.minikube.internal$ /etc/hosts
	I0318 14:20:59.690360 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:59.705657 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:59.839158 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:20:59.857617 1128788 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719 for IP: 192.168.72.45
	I0318 14:20:59.857642 1128788 certs.go:194] generating shared ca certs ...
	I0318 14:20:59.857674 1128788 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:20:59.857839 1128788 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:20:59.857882 1128788 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:20:59.857893 1128788 certs.go:256] generating profile certs ...
	I0318 14:20:59.858006 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/client.key
	I0318 14:20:59.858061 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key.f59f641c
	I0318 14:20:59.858098 1128788 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key
	I0318 14:20:59.858268 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:20:59.858301 1128788 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:20:59.858308 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:20:59.858331 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:20:59.858360 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:20:59.858382 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:20:59.858424 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:59.859110 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:20:59.901101 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:20:59.947010 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:20:59.990882 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:00.032358 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 14:21:00.070194 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:00.108670 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:00.137760 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:00.168481 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:00.199292 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:00.228315 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:00.257409 1128788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:00.277720 1128788 ssh_runner.go:195] Run: openssl version
	I0318 14:21:00.284138 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:00.296443 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302083 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302160 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.308748 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:00.322025 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:00.334654 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340319 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340404 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.347454 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:00.359627 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:00.371865 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377236 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377335 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.387041 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
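
The hash-and-symlink steps above follow the standard OpenSSL CA-directory layout: each trusted certificate is hashed with `openssl x509 -hash -noout` and exposed as /etc/ssl/certs/<subject-hash>.0. Below is a minimal Go sketch of that step, assuming openssl is on PATH and reusing a path from the log; it is an illustration only, not minikube's source, and writing into /etc/ssl/certs needs root just like the logged sudo commands.

// Hypothetical sketch (not minikube code): hash a CA cert and symlink it into
// /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(certPath string) error {
	// openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. "b5213941"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent to: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	// path taken from the log above; purely illustrative
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
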
	I0318 14:21:00.404525 1128788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:00.412919 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:00.422577 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:00.434217 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:00.444535 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:00.452863 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:00.459979 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
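
The -checkend 86400 checks above rely only on openssl's exit status: it exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A small sketch of the same check, again shelling out to openssl with paths copied from the log; this is not minikube's implementation.

// Hypothetical sketch: report whether each cert is still valid for 24h.
package main

import (
	"fmt"
	"os/exec"
)

func validFor24h(certPath string) bool {
	// exit status 0 => certificate will NOT expire within 86400 seconds
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err == nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt", // paths from the log above
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s valid for 24h: %v\n", c, validFor24h(c))
	}
}
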
	I0318 14:21:00.467503 1128788 kubeadm.go:391] StartCluster: {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:00.467680 1128788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:00.467780 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.507833 1128788 cri.go:89] found id: ""
	I0318 14:21:00.507926 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:00.519958 1128788 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:00.519982 1128788 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:00.520011 1128788 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:00.520066 1128788 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:00.532229 1128788 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:00.533479 1128788 kubeconfig.go:125] found "embed-certs-767719" server: "https://192.168.72.45:8443"
	I0318 14:21:00.536185 1128788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:00.548434 1128788 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.45
	I0318 14:21:00.548484 1128788 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:00.548498 1128788 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:00.548551 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.592096 1128788 cri.go:89] found id: ""
	I0318 14:21:00.592168 1128788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:00.610826 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:00.622294 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:00.622330 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:00.622386 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:00.633009 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:00.633089 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:20:56.628134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628708 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628747 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:56.628643 1130331 retry.go:31] will retry after 703.45784ms: waiting for machine to come up
	I0318 14:20:57.334203 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334666 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334702 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:57.334600 1130331 retry.go:31] will retry after 1.177266521s: waiting for machine to come up
	I0318 14:20:58.513803 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514452 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514485 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:58.514389 1130331 retry.go:31] will retry after 1.389627955s: waiting for machine to come up
	I0318 14:20:59.906109 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906663 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906750 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:59.906632 1130331 retry.go:31] will retry after 1.239662517s: waiting for machine to come up
	I0318 14:21:01.147929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148325 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:01.148248 1130331 retry.go:31] will retry after 2.183067358s: waiting for machine to come up
	I0318 14:21:00.644684 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:00.921213 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:00.921307 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:00.932412 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.943408 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:00.943481 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.955574 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:00.966416 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:00.966483 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:00.978014 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:00.993622 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:01.128726 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.331974 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.203164646s)
	I0318 14:21:02.332035 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.574592 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.686011 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.821189 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:02.821373 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.322200 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.822207 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.838586 1128788 api_server.go:72] duration metric: took 1.017395673s to wait for apiserver process to appear ...
	I0318 14:21:03.838622 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:03.838660 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:03.839282 1128788 api_server.go:269] stopped: https://192.168.72.45:8443/healthz: Get "https://192.168.72.45:8443/healthz": dial tcp 192.168.72.45:8443: connect: connection refused
	I0318 14:21:04.339675 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:03.333080 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333620 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333648 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:03.333583 1130331 retry.go:31] will retry after 2.259124316s: waiting for machine to come up
	I0318 14:21:05.594356 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594823 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:05.594754 1130331 retry.go:31] will retry after 2.492274875s: waiting for machine to come up
	I0318 14:21:07.054330 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:07.054373 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:07.054392 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.073841 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.073894 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.339285 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.345307 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.345340 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.838915 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.846722 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.846759 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:08.339409 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:08.344790 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:21:08.358050 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:08.358097 1128788 api_server.go:131] duration metric: took 4.519466088s to wait for apiserver health ...
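
The health wait above simply polls https://192.168.72.45:8443/healthz until it returns 200 "ok", tolerating the 403 (anonymous user before RBAC bootstrap) and 500 (post-start hooks still failing) responses seen earlier in the log. Below is a minimal poller sketch under those assumptions; it skips TLS verification for brevity and is not minikube's api_server.go.

// Hypothetical sketch: poll the apiserver /healthz endpoint until it reports 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
			// 403/500 while startup completes, as seen in the log; keep polling
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// endpoint taken from the log above
	if err := waitForHealthz("https://192.168.72.45:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
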
	I0318 14:21:08.358121 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:21:08.358130 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:08.359982 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:21:08.361428 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:08.378195 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:21:08.409269 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:08.421874 1128788 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:08.421960 1128788 system_pods.go:61] "coredns-5dd5756b68-4dmw2" [324897fc-dd26-47f1-b8bc-4d2ed721a576] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:08.421971 1128788 system_pods.go:61] "etcd-embed-certs-767719" [df147cb8-989c-408d-ade8-547858a8c2bb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:08.421982 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [82f7d170-3b3c-448c-b824-6d263c5c1128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:08.421989 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [cd4dd4f3-a727-4864-b0e9-a89758537de9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:08.422002 1128788 system_pods.go:61] "kube-proxy-mtx9w" [b46b48ff-e4c0-4595-82c4-7c0c86103262] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:08.422010 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [63774f42-c85e-467f-9bd3-0c78d44b2681] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:08.422022 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-jr9wp" [e40748e2-ebc3-4c4f-a9cc-01bbc7416f35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:08.422030 1128788 system_pods.go:61] "storage-provisioner" [1b51e6a7-2693-4d0b-b47e-ccbcb1e46424] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:08.422047 1128788 system_pods.go:74] duration metric: took 12.746875ms to wait for pod list to return data ...
	I0318 14:21:08.422058 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:08.432361 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:08.432461 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:08.432483 1128788 node_conditions.go:105] duration metric: took 10.415127ms to run NodePressure ...
	I0318 14:21:08.432524 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:08.730544 1128788 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:08.735970 1128788 kubeadm.go:733] kubelet initialised
	I0318 14:21:08.736001 1128788 kubeadm.go:734] duration metric: took 5.422027ms waiting for restarted kubelet to initialise ...
	I0318 14:21:08.736042 1128788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:08.745586 1128788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:08.090446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090834 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:08.090779 1130331 retry.go:31] will retry after 3.31085892s: waiting for machine to come up
	I0318 14:21:12.749494 1129259 start.go:364] duration metric: took 3m51.481737314s to acquireMachinesLock for "old-k8s-version-782728"
	I0318 14:21:12.749582 1129259 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:12.749596 1129259 fix.go:54] fixHost starting: 
	I0318 14:21:12.750059 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:12.750110 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:12.772262 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0318 14:21:12.772787 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:12.773383 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:21:12.773408 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:12.773864 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:12.774101 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:12.774261 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetState
	I0318 14:21:12.776193 1129259 fix.go:112] recreateIfNeeded on old-k8s-version-782728: state=Stopped err=<nil>
	I0318 14:21:12.776227 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	W0318 14:21:12.776377 1129259 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:12.778538 1129259 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-782728" ...
	I0318 14:21:11.405935 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has current primary IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406539 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Found IP for machine: 192.168.83.39
	I0318 14:21:11.406553 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserving static IP address...
	I0318 14:21:11.407015 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.407048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | skip adding static IP to network mk-default-k8s-diff-port-075922 - found existing host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"}
	I0318 14:21:11.407066 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserved static IP address: 192.168.83.39
	I0318 14:21:11.407081 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for SSH to be available...
	I0318 14:21:11.407093 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Getting to WaitForSSH function...
	I0318 14:21:11.409327 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409674 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.409706 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409895 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH client type: external
	I0318 14:21:11.409919 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa (-rw-------)
	I0318 14:21:11.410034 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:11.410065 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | About to run SSH command:
	I0318 14:21:11.410089 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | exit 0
	I0318 14:21:11.544258 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:11.544698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetConfigRaw
	I0318 14:21:11.545370 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.548333 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.548729 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.548764 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.549053 1128964 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/config.json ...
	I0318 14:21:11.549275 1128964 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:11.549295 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:11.549533 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.551799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552156 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.552186 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552280 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.552482 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552657 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552797 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.552994 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.553261 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.553278 1128964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:11.665093 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:11.665132 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665456 1128964 buildroot.go:166] provisioning hostname "default-k8s-diff-port-075922"
	I0318 14:21:11.665493 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665730 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.668911 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.669413 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669679 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.669923 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670319 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.670530 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.670718 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.670734 1128964 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-075922 && echo "default-k8s-diff-port-075922" | sudo tee /etc/hostname
	I0318 14:21:11.807520 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-075922
	
	I0318 14:21:11.807552 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.810614 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811011 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.811047 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811257 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.811480 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811941 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.812155 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.812361 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.812387 1128964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-075922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-075922/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-075922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:11.942984 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:11.943022 1128964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:11.943078 1128964 buildroot.go:174] setting up certificates
	I0318 14:21:11.943094 1128964 provision.go:84] configureAuth start
	I0318 14:21:11.943108 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.943441 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.946659 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947091 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.947125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947328 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.949852 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950275 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.950310 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950496 1128964 provision.go:143] copyHostCerts
	I0318 14:21:11.950579 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:11.950596 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:11.950679 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:11.950859 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:11.950873 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:11.950898 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:11.950964 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:11.950971 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:11.950988 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:11.951041 1128964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-075922 san=[127.0.0.1 192.168.83.39 default-k8s-diff-port-075922 localhost minikube]
	I0318 14:21:12.019678 1128964 provision.go:177] copyRemoteCerts
	I0318 14:21:12.019756 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:12.019788 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.023122 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023603 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.023639 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023862 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.024077 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.024294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.024445 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.112914 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:12.142575 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 14:21:12.171747 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:12.200144 1128964 provision.go:87] duration metric: took 257.034667ms to configureAuth
	I0318 14:21:12.200177 1128964 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:12.200401 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:21:12.200515 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.203573 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.203978 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.204019 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.204160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.204379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204658 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.205131 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.205335 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.205367 1128964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:12.494965 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:12.494997 1128964 machine.go:97] duration metric: took 945.707691ms to provisionDockerMachine
	I0318 14:21:12.495012 1128964 start.go:293] postStartSetup for "default-k8s-diff-port-075922" (driver="kvm2")
	I0318 14:21:12.495026 1128964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:12.495048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.495450 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:12.495486 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.498444 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.498821 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498928 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.499166 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.499363 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.499560 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.588350 1128964 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:12.593611 1128964 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:12.593638 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:12.593714 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:12.593788 1128964 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:12.593875 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:12.605751 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:12.633577 1128964 start.go:296] duration metric: took 138.54984ms for postStartSetup
	I0318 14:21:12.633621 1128964 fix.go:56] duration metric: took 19.360327899s for fixHost
	I0318 14:21:12.633645 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.636446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636822 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.636850 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636989 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.637237 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637428 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637596 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.637786 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.637988 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.638002 1128964 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 14:21:12.749326 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771672.727120819
	
	I0318 14:21:12.749355 1128964 fix.go:216] guest clock: 1710771672.727120819
	I0318 14:21:12.749364 1128964 fix.go:229] Guest: 2024-03-18 14:21:12.727120819 +0000 UTC Remote: 2024-03-18 14:21:12.633625447 +0000 UTC m=+271.308784721 (delta=93.495372ms)
	I0318 14:21:12.749386 1128964 fix.go:200] guest clock delta is within tolerance: 93.495372ms
	I0318 14:21:12.749392 1128964 start.go:83] releasing machines lock for "default-k8s-diff-port-075922", held for 19.476136638s
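
The fixHost step above reads the guest clock with `date +%s.%N` and only resyncs it when the difference from the host clock exceeds a tolerance (the 93ms delta here was within bounds). A minimal sketch of that comparison; the 2-second threshold is an assumed value for illustration, not minikube's exact constant:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far
	// the guest clock is from the local (host) clock.
	func clockDelta(guestDate string) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestDate), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad/truncate the fractional part to nine digits before parsing.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return 0, err
			}
		}
		d := time.Since(time.Unix(sec, nsec))
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		d, err := clockDelta("1710771672.727120819")
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed threshold for the sketch
		fmt.Printf("delta=%v within tolerance=%v\n", d, d <= tolerance)
	}
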
	I0318 14:21:12.749418 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.749732 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:12.752996 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753471 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.753506 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753815 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754448 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754651 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754744 1128964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:12.754791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.754943 1128964 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:12.754970 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.758153 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758303 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758628 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758660 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758694 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758758 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758927 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758988 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759057 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759157 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759251 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759292 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.759371 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.841423 1128964 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:12.868154 1128964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:13.020652 1128964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:13.028168 1128964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:13.028267 1128964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:13.047225 1128964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:13.047264 1128964 start.go:494] detecting cgroup driver to use...
	I0318 14:21:13.047361 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:13.064518 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:13.080271 1128964 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:13.080356 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:13.095583 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:13.110387 1128964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:13.250934 1128964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:13.450657 1128964 docker.go:233] disabling docker service ...
	I0318 14:21:13.450738 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:13.471701 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:13.488157 1128964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:13.644961 1128964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:13.811333 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:13.828584 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:13.852476 1128964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:13.852557 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.864849 1128964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:13.864951 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.877723 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.890337 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.902558 1128964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:13.915858 1128964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:13.928426 1128964 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:13.928526 1128964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:13.951761 1128964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:13.964785 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:14.144432 1128964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:14.311928 1128964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:14.312078 1128964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:14.319279 1128964 start.go:562] Will wait 60s for crictl version
	I0318 14:21:14.319347 1128964 ssh_runner.go:195] Run: which crictl
	I0318 14:21:14.325325 1128964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:14.385244 1128964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:14.385344 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.426242 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.460725 1128964 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
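
Between restarting cri-o and calling crictl, the run above waits up to 60s for /var/run/crio/crio.sock to appear. A small sketch of such a polling wait; the 500ms interval is an assumption:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the given path exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
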
	I0318 14:21:10.753176 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:12.756558 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:13.760252 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:13.760295 1128788 pod_ready.go:81] duration metric: took 5.014671723s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:13.760315 1128788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:12.780014 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .Start
	I0318 14:21:12.780429 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring networks are active...
	I0318 14:21:12.781303 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network default is active
	I0318 14:21:12.781644 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network mk-old-k8s-version-782728 is active
	I0318 14:21:12.782077 1129259 main.go:141] libmachine: (old-k8s-version-782728) Getting domain xml...
	I0318 14:21:12.782826 1129259 main.go:141] libmachine: (old-k8s-version-782728) Creating domain...
	I0318 14:21:14.142992 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting to get IP...
	I0318 14:21:14.144199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.144824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.144851 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.144681 1130456 retry.go:31] will retry after 192.354686ms: waiting for machine to come up
	I0318 14:21:14.339303 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.339861 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.339886 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.339806 1130456 retry.go:31] will retry after 389.480557ms: waiting for machine to come up
	I0318 14:21:14.731567 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.732127 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.732163 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.732075 1130456 retry.go:31] will retry after 435.139168ms: waiting for machine to come up
	I0318 14:21:15.168657 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.169170 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.169209 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.169147 1130456 retry.go:31] will retry after 398.075576ms: waiting for machine to come up
	I0318 14:21:15.569132 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.569651 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.569699 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.569627 1130456 retry.go:31] will retry after 716.720722ms: waiting for machine to come up
	I0318 14:21:14.461974 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:14.465116 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465652 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:14.465696 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465903 1128964 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:14.471039 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:14.486098 1128964 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:14.486307 1128964 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:21:14.486379 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:14.526373 1128964 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:21:14.526476 1128964 ssh_runner.go:195] Run: which lz4
	I0318 14:21:14.531145 1128964 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:21:14.536370 1128964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:14.536412 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 14:21:15.769517 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:17.772721 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:18.769552 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:18.769590 1128788 pod_ready.go:81] duration metric: took 5.009265127s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:18.769610 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:16.287569 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:16.288171 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:16.288208 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:16.288111 1130456 retry.go:31] will retry after 837.119291ms: waiting for machine to come up
	I0318 14:21:17.127197 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.127610 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.127641 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.127572 1130456 retry.go:31] will retry after 786.468871ms: waiting for machine to come up
	I0318 14:21:17.916280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.916885 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.916920 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.916827 1130456 retry.go:31] will retry after 1.219601482s: waiting for machine to come up
	I0318 14:21:19.137624 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:19.138092 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:19.138124 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:19.138038 1130456 retry.go:31] will retry after 1.236592895s: waiting for machine to come up
	I0318 14:21:20.376069 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:20.376549 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:20.376574 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:20.376518 1130456 retry.go:31] will retry after 2.101851485s: waiting for machine to come up
	I0318 14:21:16.505094 1128964 crio.go:444] duration metric: took 1.973996063s to copy over tarball
	I0318 14:21:16.505250 1128964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:19.251009 1128964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.745717226s)
	I0318 14:21:19.251045 1128964 crio.go:451] duration metric: took 2.745895394s to extract the tarball
	I0318 14:21:19.251053 1128964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:21:19.308392 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:19.363143 1128964 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:21:19.363172 1128964 cache_images.go:84] Images are preloaded, skipping loading
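
The preload path above inspects `crictl images --output json`, and when the expected kube-apiserver image is absent it copies the cached tarball and unpacks it with `tar -I lz4 -C /var -xf`. A simplified sketch of that check-then-extract flow (image name and tarball path follow the log; this is illustrative, not minikube's implementation):

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePreloaded reports whether crictl already lists the given image.
	func imagePreloaded(image string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var resp struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			return false, err
		}
		for _, img := range resp.Images {
			for _, tag := range img.RepoTags {
				if strings.HasPrefix(tag, image) {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := imagePreloaded("registry.k8s.io/kube-apiserver:v1.28.4")
		if err != nil {
			panic(err)
		}
		if !ok {
			// Unpack the preload tarball into /var, preserving xattrs, as the log does.
			cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
				"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
			var stderr bytes.Buffer
			cmd.Stderr = &stderr
			if err := cmd.Run(); err != nil {
				panic(fmt.Errorf("extract failed: %v: %s", err, stderr.String()))
			}
		}
		fmt.Println("images preloaded:", ok)
	}
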
	I0318 14:21:19.363181 1128964 kubeadm.go:928] updating node { 192.168.83.39 8444 v1.28.4 crio true true} ...
	I0318 14:21:19.363313 1128964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-075922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:21:19.363415 1128964 ssh_runner.go:195] Run: crio config
	I0318 14:21:19.415995 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:19.416028 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:19.416048 1128964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:19.416085 1128964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-075922 NodeName:default-k8s-diff-port-075922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:21:19.416297 1128964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-075922"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:19.416379 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:21:19.427340 1128964 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:19.427420 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:19.438470 1128964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0318 14:21:19.459945 1128964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:19.479728 1128964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0318 14:21:19.500079 1128964 ssh_runner.go:195] Run: grep 192.168.83.39	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:19.504746 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:19.519931 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:19.654822 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
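
The /etc/hosts update above strips any stale control-plane.minikube.internal line and appends the current mapping before kubelet is started. A simplified Go version of that rewrite, assuming direct write access to the file (the real run goes through sudo and a temp file):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHost removes any existing line ending with the hostname and appends "ip\thostname".
	func pinHost(hostsPath, ip, hostname string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any entry that already maps this hostname.
			if strings.HasSuffix(line, "\t"+hostname) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+hostname)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := pinHost("/etc/hosts", "192.168.83.39", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
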
	I0318 14:21:19.675414 1128964 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922 for IP: 192.168.83.39
	I0318 14:21:19.675443 1128964 certs.go:194] generating shared ca certs ...
	I0318 14:21:19.675462 1128964 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:19.675647 1128964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:19.675707 1128964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:19.675722 1128964 certs.go:256] generating profile certs ...
	I0318 14:21:19.675861 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/client.key
	I0318 14:21:19.683399 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key.675162fd
	I0318 14:21:19.683522 1128964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key
	I0318 14:21:19.683667 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:19.683715 1128964 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:19.683730 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:19.683782 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:19.683870 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:19.683897 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:19.683940 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:19.684679 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:19.743065 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:19.787963 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:19.833491 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:19.865359 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 14:21:19.903294 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:19.932298 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:19.961860 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 14:21:19.992150 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:20.020750 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:20.047780 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:20.074566 1128964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:20.094524 1128964 ssh_runner.go:195] Run: openssl version
	I0318 14:21:20.101181 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:20.118970 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124628 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124707 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.133462 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:20.150447 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:20.165864 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173488 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173627 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.183147 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:20.200417 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:20.213973 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219407 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219488 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.226491 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:20.240299 1128964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:20.245960 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:20.253073 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:20.260144 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:20.267546 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:20.274740 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:20.282502 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
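
The `openssl x509 -checkend 86400` calls above confirm each control-plane certificate stays valid for at least another day before it is reused. An equivalent check with crypto/x509; the 24h window mirrors the 86400 seconds, and the path is just one of the certs from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within the given window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
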
	I0318 14:21:20.289722 1128964 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:20.289817 1128964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:20.289877 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.338941 1128964 cri.go:89] found id: ""
	I0318 14:21:20.339036 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:20.350677 1128964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:20.350706 1128964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:20.350718 1128964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:20.350775 1128964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:20.362216 1128964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:20.363622 1128964 kubeconfig.go:125] found "default-k8s-diff-port-075922" server: "https://192.168.83.39:8444"
	I0318 14:21:20.366606 1128964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:20.379417 1128964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.39
	I0318 14:21:20.379460 1128964 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:20.379481 1128964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:20.379556 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.423139 1128964 cri.go:89] found id: ""
	I0318 14:21:20.423224 1128964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:20.444111 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:20.456698 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:20.456725 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:20.456787 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:21:20.467432 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:20.467502 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:20.478894 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:21:20.490123 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:20.490216 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:20.501744 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.514020 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:20.514084 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.526805 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:21:20.538374 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:20.538452 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:20.550880 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:20.562302 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:20.687288 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.085960 1128788 pod_ready.go:102] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:21.781260 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.781287 1128788 pod_ready.go:81] duration metric: took 3.011668835s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.781297 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789501 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.789537 1128788 pod_ready.go:81] duration metric: took 8.231402ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789552 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797445 1128788 pod_ready.go:92] pod "kube-proxy-mtx9w" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.797483 1128788 pod_ready.go:81] duration metric: took 7.921289ms for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797496 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804084 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.804120 1128788 pod_ready.go:81] duration metric: took 6.613559ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804132 1128788 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:23.812751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:22.480055 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:22.480767 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:22.480805 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:22.480700 1130456 retry.go:31] will retry after 2.377253243s: waiting for machine to come up
	I0318 14:21:24.861000 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:24.861459 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:24.861513 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:24.861440 1130456 retry.go:31] will retry after 2.768860765s: waiting for machine to come up
	I0318 14:21:21.432193 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.821781 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.899411 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
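
The restart path re-runs individual `kubeadm init` phases against the generated /var/tmp/minikube/kubeadm.yaml, in the order seen above: certs, kubeconfig, kubelet-start, control-plane, then etcd. A compressed sketch of that sequence via os/exec; the binary path follows the log, but this is an illustration rather than minikube's code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const kubeadm = "/var/lib/minikube/binaries/v1.28.4/kubeadm"
		const config = "/var/tmp/minikube/kubeadm.yaml"

		// Phases in the order the log runs them during a control-plane restart.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{kubeadm}, append(p, "--config", config)...)
			cmd := exec.Command("sudo", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}
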
	I0318 14:21:21.984494 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:21.984624 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.484985 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.985119 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:23.009700 1128964 api_server.go:72] duration metric: took 1.025195346s to wait for apiserver process to appear ...
	I0318 14:21:23.009739 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:23.009764 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:23.010328 1128964 api_server.go:269] stopped: https://192.168.83.39:8444/healthz: Get "https://192.168.83.39:8444/healthz": dial tcp 192.168.83.39:8444: connect: connection refused
	I0318 14:21:23.510606 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.307173 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.307217 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.307238 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.345507 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.345551 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.510350 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.515684 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:26.515721 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.010509 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.015492 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:27.015526 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.510772 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.520209 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:21:27.527945 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:27.527978 1128964 api_server.go:131] duration metric: took 4.518232257s to wait for apiserver health ...
	I0318 14:21:27.527988 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:27.527994 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:27.529779 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
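Note: the healthz progression above (403 while the request is still treated as anonymous, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes hooks are pending, then 200) can be spot-checked by hand against the same endpoint. A minimal sketch, assuming the apiserver on 192.168.83.39:8444 is reachable from the host and that an unauthenticated probe is acceptable (anonymous access to /healthz is only granted once the RBAC bootstrap roles exist):

	# keep polling until /healthz answers HTTP 200; curl -f treats 403/500 as failures, so the loop retries
	until curl -ksf https://192.168.83.39:8444/healthz >/dev/null; do sleep 0.5; done
	# once healthy the endpoint returns the single word "ok", matching the 200 response logged above
	curl -ks https://192.168.83.39:8444/healthz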
	I0318 14:21:26.313296 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:28.811916 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:27.633200 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:27.633774 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:27.633824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:27.633712 1130456 retry.go:31] will retry after 2.743873993s: waiting for machine to come up
	I0318 14:21:30.380835 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:30.381280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:30.381314 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:30.381213 1130456 retry.go:31] will retry after 4.377164627s: waiting for machine to come up
	I0318 14:21:27.531259 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:27.573198 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:21:27.619813 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:27.629766 1128964 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:27.629805 1128964 system_pods.go:61] "coredns-5dd5756b68-dsrcd" [86ac331d-2539-4fbb-8cf8-56f58afa6f99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:27.629815 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [0de3bd3b-6ee2-46e2-83f7-7c637115879f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:27.629821 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [e1e689c8-642c-428e-bddf-43c2c1524563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:27.629832 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [1a200d0f-53e6-4e44-a8b0-28b9d21f763e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:27.629837 1128964 system_pods.go:61] "kube-proxy-wbnvd" [6bf13050-a150-4133-93e2-71ddcad443ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:27.629842 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [87bc17b3-75c6-4d6b-9b8f-29823398100a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:27.629847 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-4vrvb" [d12dc531-720c-4a7a-93af-69b9005666fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:27.629852 1128964 system_pods.go:61] "storage-provisioner" [856896cd-daec-4873-8f9c-c7cadeb3c16e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:27.629857 1128964 system_pods.go:74] duration metric: took 10.000416ms to wait for pod list to return data ...
	I0318 14:21:27.629866 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:27.634112 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:27.634147 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:27.634159 1128964 node_conditions.go:105] duration metric: took 4.287491ms to run NodePressure ...
	I0318 14:21:27.634190 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:27.976277 1128964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980894 1128964 kubeadm.go:733] kubelet initialised
	I0318 14:21:27.980920 1128964 kubeadm.go:734] duration metric: took 4.609836ms waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980932 1128964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:27.986151 1128964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:29.993963 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
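Note: the pod_ready.go polling above is reading the pod's Ready condition. An equivalent one-off check from the host, assuming the kubeconfig context carries the profile name default-k8s-diff-port-075922:

	# prints "False" while the restarted coredns container is still coming up, "True" once the pod is Ready
	kubectl --context default-k8s-diff-port-075922 -n kube-system get pod coredns-5dd5756b68-dsrcd \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'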
	I0318 14:21:31.313401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:33.811753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.760820 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Found IP for machine: 192.168.50.229
	I0318 14:21:34.761353 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has current primary IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761362 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserving static IP address...
	I0318 14:21:34.761782 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.761820 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserved static IP address: 192.168.50.229
	I0318 14:21:34.761845 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | skip adding static IP to network mk-old-k8s-version-782728 - found existing host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"}
	I0318 14:21:34.761864 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Getting to WaitForSSH function...
	I0318 14:21:34.761881 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting for SSH to be available...
	I0318 14:21:34.764073 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764333 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.764360 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764532 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH client type: external
	I0318 14:21:34.764572 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa (-rw-------)
	I0318 14:21:34.764613 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:34.764631 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | About to run SSH command:
	I0318 14:21:34.764647 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | exit 0
	I0318 14:21:34.896449 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:34.896855 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetConfigRaw
	I0318 14:21:34.897582 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:34.899986 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900376 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.900416 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900800 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:21:34.901117 1129259 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:34.901147 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:34.901437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:34.904052 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904424 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.904452 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904606 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:34.904785 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.904945 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.905107 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:34.905279 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:34.905513 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:34.905531 1129259 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:35.016717 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:35.016763 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017067 1129259 buildroot.go:166] provisioning hostname "old-k8s-version-782728"
	I0318 14:21:35.017099 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017382 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.020497 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.020890 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.020924 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.021057 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.021277 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021590 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.021849 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.022055 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.022070 1129259 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-782728 && echo "old-k8s-version-782728" | sudo tee /etc/hostname
	I0318 14:21:35.147357 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-782728
	
	I0318 14:21:35.147390 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.150191 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150607 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.150636 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150853 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.151114 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151347 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151546 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.151781 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.152045 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.152072 1129259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-782728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-782728/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-782728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:35.275206 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:35.275240 1129259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:35.275285 1129259 buildroot.go:174] setting up certificates
	I0318 14:21:35.275295 1129259 provision.go:84] configureAuth start
	I0318 14:21:35.275306 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.275669 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:35.278614 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279090 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.279130 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279354 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.282199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282559 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.282595 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282756 1129259 provision.go:143] copyHostCerts
	I0318 14:21:35.282849 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:35.282867 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:35.282929 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:35.283102 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:35.283114 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:35.283139 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:35.283203 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:35.283210 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:35.283227 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:35.283275 1129259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-782728 san=[127.0.0.1 192.168.50.229 localhost minikube old-k8s-version-782728]
	I0318 14:21:35.515186 1129259 provision.go:177] copyRemoteCerts
	I0318 14:21:35.515266 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:35.515318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.517932 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518244 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.518297 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518441 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.518653 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.518795 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.518970 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:35.607609 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:35.636141 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 14:21:35.664489 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:35.692201 1129259 provision.go:87] duration metric: took 416.891642ms to configureAuth
	I0318 14:21:35.692259 1129259 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:35.692491 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:21:35.692585 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.695742 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696122 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.696159 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696325 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.696561 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696767 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696934 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.697111 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.697355 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.697384 1129259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:35.994320 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:35.994352 1129259 machine.go:97] duration metric: took 1.093217385s to provisionDockerMachine
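Note: the %!s(MISSING) fragments in the command above (and in similar lines further down, e.g. date +%!s(MISSING).%!N(MISSING) and the find ... -printf "%!p(MISSING), " invocation) are Go fmt's marker for a format verb with no matching argument, introduced when the already-assembled command string is run back through a formatting call as it is logged. The command executed on the guest most plausibly keeps the literal shell verbs, i.e. roughly:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

with date +%s.%N as the corresponding clock probe.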
	I0318 14:21:35.994367 1129259 start.go:293] postStartSetup for "old-k8s-version-782728" (driver="kvm2")
	I0318 14:21:35.994383 1129259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:35.994415 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:35.994757 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:35.994799 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.997438 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.997814 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.997850 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.998044 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.998241 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.998437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.998571 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.089357 1129259 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:36.094372 1129259 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:36.094407 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:36.094499 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:36.094617 1129259 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:36.094714 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:36.106796 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:36.135520 1129259 start.go:296] duration metric: took 141.136354ms for postStartSetup
	I0318 14:21:36.135573 1129259 fix.go:56] duration metric: took 23.385978091s for fixHost
	I0318 14:21:36.135607 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.139108 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139458 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.139491 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139689 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.139978 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140226 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140353 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.140528 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:36.140755 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:36.140771 1129259 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:36.252999 1128583 start.go:364] duration metric: took 57.905644198s to acquireMachinesLock for "no-preload-188109"
	I0318 14:21:36.253054 1128583 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:36.253063 1128583 fix.go:54] fixHost starting: 
	I0318 14:21:36.253510 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:36.253545 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:36.271856 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0318 14:21:36.272254 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:36.272790 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:21:36.272822 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:36.273237 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:36.273446 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:36.273614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:21:36.275414 1128583 fix.go:112] recreateIfNeeded on no-preload-188109: state=Stopped err=<nil>
	I0318 14:21:36.275440 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	W0318 14:21:36.275623 1128583 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:36.277528 1128583 out.go:177] * Restarting existing kvm2 VM for "no-preload-188109" ...
	I0318 14:21:31.995770 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.495078 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.252848 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771696.238093940
	
	I0318 14:21:36.252877 1129259 fix.go:216] guest clock: 1710771696.238093940
	I0318 14:21:36.252884 1129259 fix.go:229] Guest: 2024-03-18 14:21:36.23809394 +0000 UTC Remote: 2024-03-18 14:21:36.13557956 +0000 UTC m=+255.035410784 (delta=102.51438ms)
	I0318 14:21:36.252906 1129259 fix.go:200] guest clock delta is within tolerance: 102.51438ms
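Note: the delta reported here is just guest minus host wall clock: 14:21:36.238093940 - 14:21:36.135579560 = 0.10251438 s = 102.51438 ms, which is inside minikube's skew tolerance, so no guest clock correction is attempted.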
	I0318 14:21:36.252911 1129259 start.go:83] releasing machines lock for "old-k8s-version-782728", held for 23.503358875s
	I0318 14:21:36.252936 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.253200 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:36.256277 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256711 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.256741 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256901 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257487 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257702 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257827 1129259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:36.257887 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.258009 1129259 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:36.258034 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.260840 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261336 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261358 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261456 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.261692 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.261789 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261818 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261892 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.261982 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.262127 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.262173 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.262300 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.262429 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.345131 1129259 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:36.371649 1129259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:36.524261 1129259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:36.533020 1129259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:36.533151 1129259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:36.551817 1129259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:36.551860 1129259 start.go:494] detecting cgroup driver to use...
	I0318 14:21:36.551933 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:36.575948 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:36.596748 1129259 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:36.596820 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:36.614156 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:36.630681 1129259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:36.753374 1129259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:36.944402 1129259 docker.go:233] disabling docker service ...
	I0318 14:21:36.944496 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:36.966727 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:36.987565 1129259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:37.121256 1129259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:37.264652 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:37.281737 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:37.306307 1129259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 14:21:37.306374 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.318728 1129259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:37.318818 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.330587 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.343063 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
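Note: the three sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and append conmon_cgroup = "pod" to the same drop-in. A quick way to confirm the result on the guest (a sketch, using the drop-in path from the commands above):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"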
	I0318 14:21:37.356170 1129259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:37.369932 1129259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:37.380417 1129259 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:37.380487 1129259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:37.397409 1129259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:37.414745 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:37.571427 1129259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:37.747275 1129259 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:37.747357 1129259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:37.752838 1129259 start.go:562] Will wait 60s for crictl version
	I0318 14:21:37.752922 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:37.758286 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:37.799301 1129259 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:37.799400 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.838257 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.889692 1129259 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 14:21:35.812465 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:37.820263 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.313683 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.278973 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Start
	I0318 14:21:36.279160 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring networks are active...
	I0318 14:21:36.280043 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network default is active
	I0318 14:21:36.280495 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network mk-no-preload-188109 is active
	I0318 14:21:36.281014 1128583 main.go:141] libmachine: (no-preload-188109) Getting domain xml...
	I0318 14:21:36.281995 1128583 main.go:141] libmachine: (no-preload-188109) Creating domain...
	I0318 14:21:37.644409 1128583 main.go:141] libmachine: (no-preload-188109) Waiting to get IP...
	I0318 14:21:37.645406 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.645958 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.646047 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.645922 1130597 retry.go:31] will retry after 223.965782ms: waiting for machine to come up
	I0318 14:21:37.871397 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.871933 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.871971 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.871882 1130597 retry.go:31] will retry after 272.743353ms: waiting for machine to come up
	I0318 14:21:38.146680 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.147278 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.147309 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.147211 1130597 retry.go:31] will retry after 414.468616ms: waiting for machine to come up
	I0318 14:21:38.563199 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.563768 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.563794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.563718 1130597 retry.go:31] will retry after 582.588791ms: waiting for machine to come up
	I0318 14:21:39.147611 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.148410 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.148436 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.148315 1130597 retry.go:31] will retry after 686.425224ms: waiting for machine to come up
	I0318 14:21:39.836964 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.837647 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.837677 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.837593 1130597 retry.go:31] will retry after 878.564369ms: waiting for machine to come up
	I0318 14:21:40.717644 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:40.718346 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:40.718380 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:40.718276 1130597 retry.go:31] will retry after 1.183201382s: waiting for machine to come up
	I0318 14:21:37.891038 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:37.894295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.894865 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:37.894896 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.895237 1129259 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:37.899967 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:37.916249 1129259 kubeadm.go:877] updating cluster {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:37.916384 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:21:37.916449 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:37.974406 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:37.974492 1129259 ssh_runner.go:195] Run: which lz4
	I0318 14:21:37.979374 1129259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:21:37.984355 1129259 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:37.984400 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 14:21:39.978421 1129259 crio.go:444] duration metric: took 1.99908094s to copy over tarball
	I0318 14:21:39.978524 1129259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
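The preload step above has three parts: stat the guest to see whether /preloaded.tar.lz4 is already there, scp the ~473 MB cached tarball over when it is not, then stream-extract it into /var with lz4 as tar's decompressor while keeping the security.capability xattrs. A hedged local equivalent of that final extract call (flags copied from the log line; running it for real requires the tarball and root):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Same flags as the log: extract the lz4-compressed image preload into /var,
    	// preserving the security.capability extended attributes on the files.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    }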
	I0318 14:21:36.995480 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:39.005382 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.495300 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.495345 1128964 pod_ready.go:81] duration metric: took 12.509166884s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.495358 1128964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504432 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.504467 1128964 pod_ready.go:81] duration metric: took 9.100778ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504480 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515466 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.515506 1128964 pod_ready.go:81] duration metric: took 11.017212ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515519 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525891 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.525929 1128964 pod_ready.go:81] duration metric: took 10.399892ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525943 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534161 1128964 pod_ready.go:92] pod "kube-proxy-wbnvd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.534196 1128964 pod_ready.go:81] duration metric: took 8.245545ms for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534208 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
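The pod_ready.go lines are repeated polls of each control-plane pod, logging "Ready":"True" only once the pod's PodReady condition flips. A minimal client-go version of that check (illustrative, not minikube's pod_ready.go; the kubeconfig path here is an assumption):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
    		"etcd-default-k8s-diff-port-075922", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Ready:", podIsReady(pod))
    }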
	I0318 14:21:42.314504 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:44.812532 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:41.902972 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:41.903707 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:41.903736 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:41.903670 1130597 retry.go:31] will retry after 1.282612289s: waiting for machine to come up
	I0318 14:21:43.188745 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:43.189303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:43.189332 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:43.189257 1130597 retry.go:31] will retry after 1.175485401s: waiting for machine to come up
	I0318 14:21:44.366602 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:44.367162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:44.367191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:44.367121 1130597 retry.go:31] will retry after 1.700678954s: waiting for machine to come up
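The libmachine block above is polling the libvirt DHCP leases for no-preload-188109: each pass fails to find an IP and schedules another attempt after a growing, jittered delay. A generic sketch of that retry shape (the lookup function and delay growth here are placeholders, not libmachine's values):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the DHCP-lease lookup; it fails until the lease exists.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errors.New("unable to find current IP address")
    	}
    	return "192.168.61.40", nil
    }

    func main() {
    	delay := time.Second
    	for attempt := 1; ; attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			fmt.Println("found IP:", ip)
    			return
    		}
    		// Grow the delay and add jitter, like the "will retry after 1.183201382s" lines.
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay += delay / 4
    	}
    }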
	I0318 14:21:43.321091 1129259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342462355s)
	I0318 14:21:43.321144 1129259 crio.go:451] duration metric: took 3.342687518s to extract the tarball
	I0318 14:21:43.321155 1129259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:21:43.365776 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:43.433785 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:43.433824 1129259 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:43.433900 1129259 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.434017 1129259 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.434032 1129259 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 14:21:43.434046 1129259 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.434053 1129259 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.434305 1129259 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436059 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.436080 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.436108 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.436157 1129259 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.436171 1129259 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436220 1129259 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 14:21:43.436239 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.436852 1129259 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.592274 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.597491 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.602837 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.613030 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.613827 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.626606 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.643937 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 14:21:43.712054 1129259 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 14:21:43.712144 1129259 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.712203 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.745459 1129259 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 14:21:43.745524 1129259 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.745578 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.804000 1129259 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 14:21:43.804069 1129259 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.804132 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.818890 1129259 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 14:21:43.818946 1129259 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.818948 1129259 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 14:21:43.818984 1129259 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.818996 1129259 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 14:21:43.819000 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819013 1129259 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.819034 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819043 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819047 1129259 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 14:21:43.819079 1129259 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 14:21:43.819111 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819145 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.819113 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.819191 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.900808 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 14:21:43.900881 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 14:21:43.900956 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 14:21:43.900960 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.901030 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 14:21:43.901092 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.901124 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.979791 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 14:21:43.999132 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 14:21:44.055513 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:44.211993 1129259 cache_images.go:92] duration metric: took 778.138355ms to LoadCachedImages
	W0318 14:21:44.212165 1129259 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
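The image-cache pass above runs per image: podman image inspect to see whether the image ID is already in the runtime, mark it "needs transfer" when it is not, remove any stale copy with crictl rmi, then load the tarball from .minikube/cache/images/... . In this run every cached file is missing, so LoadCachedImages gives up and kubeadm will pull from the registry instead. A compact sketch of that decision (cache layout and loader are simplified placeholders):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // ensureImage checks the container runtime for img and, if it is absent,
    // looks for a cached tarball to load instead of pulling from the registry.
    func ensureImage(img, cacheDir string) error {
    	// "podman image inspect" exits non-zero when the image is not present.
    	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Run(); err == nil {
    		return nil // already in the runtime, nothing to do
    	}
    	// Cache files are named like registry.k8s.io/pause_3.2 (":" becomes "_").
    	cached := filepath.Join(cacheDir, filepath.FromSlash(strings.ReplaceAll(img, ":", "_")))
    	if _, err := os.Stat(cached); err != nil {
    		// This is the situation behind the W-level "Unable to load cached images" line.
    		return fmt.Errorf("loading cached images: %w", err)
    	}
    	// A real loader would now copy the tarball to the guest and load it there.
    	fmt.Println("would load", cached)
    	return nil
    }

    func main() {
    	err := ensureImage("registry.k8s.io/pause:3.2", os.ExpandEnv("$HOME/.minikube/cache/images/amd64"))
    	fmt.Println(err)
    }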
	I0318 14:21:44.212193 1129259 kubeadm.go:928] updating node { 192.168.50.229 8443 v1.20.0 crio true true} ...
	I0318 14:21:44.212368 1129259 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-782728 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
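A note on the kubelet unit drop-in printed above: the bare ExecStart= line is deliberate. In a systemd drop-in, assigning an empty ExecStart= first clears the command inherited from the base unit, so the ExecStart= that follows (the v1.20.0 kubelet with the crio socket and node IP) replaces it instead of being appended as a second command.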
	I0318 14:21:44.212495 1129259 ssh_runner.go:195] Run: crio config
	I0318 14:21:44.269727 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:21:44.269766 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:44.269785 1129259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:44.269814 1129259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-782728 NodeName:old-k8s-version-782728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 14:21:44.270015 1129259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-782728"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:44.270105 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 14:21:44.282940 1129259 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:44.283039 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:44.295320 1129259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 14:21:44.315686 1129259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:44.335233 1129259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 14:21:44.357698 1129259 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:44.362264 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:44.377101 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:44.528190 1129259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:44.549708 1129259 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728 for IP: 192.168.50.229
	I0318 14:21:44.549735 1129259 certs.go:194] generating shared ca certs ...
	I0318 14:21:44.549763 1129259 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:44.549989 1129259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:44.550058 1129259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:44.550074 1129259 certs.go:256] generating profile certs ...
	I0318 14:21:44.550213 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.key
	I0318 14:21:44.550297 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612
	I0318 14:21:44.550356 1129259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key
	I0318 14:21:44.550551 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:44.550592 1129259 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:44.550606 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:44.550645 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:44.550677 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:44.550723 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:44.550778 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:44.551493 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:44.612076 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:44.644841 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:44.677687 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:44.719459 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 14:21:44.767865 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 14:21:44.816764 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:44.860167 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:44.891216 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:44.927632 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:44.965589 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:45.002269 1129259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:45.025347 1129259 ssh_runner.go:195] Run: openssl version
	I0318 14:21:45.032361 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:45.046783 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052835 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052942 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.060025 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:45.073939 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:45.087380 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092866 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092945 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.099328 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:45.112233 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:45.126449 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132566 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132667 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.139307 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
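The openssl/ln sequence above builds the standard OpenSSL CA directory layout: each trusted PEM gets a symlink named after its subject hash plus a ".0" suffix (b5213941.0, 3ec20f2e.0, 51391683.0 here), which is how OpenSSL finds issuers at verification time. A hedged sketch of those two steps (error handling trimmed; paths taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash creates certsDir/<hash>.0 -> pemPath, the layout that
    // "openssl x509 -hash" plus "ln -fs" produces in the log above.
    func linkBySubjectHash(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // -f semantics: replace an existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	fmt.Println(err)
    }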
	I0318 14:21:45.153117 1129259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:45.158588 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:45.166096 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:45.173537 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:45.181337 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:45.189126 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:45.197163 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
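The -checkend 86400 runs above ask openssl whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit is what would push minikube toward regenerating certs. The same probe as a tiny standalone program (cert path taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// "-checkend 86400": exit 0 if the certificate is still valid 24h from now.
    	err := exec.Command("openssl", "x509", "-noout",
    		"-in", "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"-checkend", "86400").Run()
    	fmt.Println("needs renewal within 24h:", err != nil)
    }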
	I0318 14:21:45.206171 1129259 kubeadm.go:391] StartCluster: {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:45.206295 1129259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:45.206370 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.247013 1129259 cri.go:89] found id: ""
	I0318 14:21:45.247119 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:45.261917 1129259 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:45.261947 1129259 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:45.261955 1129259 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:45.262015 1129259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:45.276154 1129259 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:45.277263 1129259 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:21:45.277937 1129259 kubeconfig.go:62] /home/jenkins/minikube-integration/18427-1067917/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-782728" cluster setting kubeconfig missing "old-k8s-version-782728" context setting]
	I0318 14:21:45.278862 1129259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:45.280825 1129259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:45.295159 1129259 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.229
	I0318 14:21:45.295211 1129259 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:45.295255 1129259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:45.295321 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.343156 1129259 cri.go:89] found id: ""
	I0318 14:21:45.343242 1129259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:45.361812 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:45.376218 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:45.376250 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:45.376314 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:45.386913 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:45.387056 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:45.398244 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:45.409397 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:45.409476 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:45.421057 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.432124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:45.432193 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.443793 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:45.454348 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:45.454463 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:45.465286 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:45.477199 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:45.613588 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:41.690971 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:41.691009 1128964 pod_ready.go:81] duration metric: took 1.156786821s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:41.691020 1128964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:44.189110 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.201644 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.813954 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:48.817402 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.069196 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:46.069747 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:46.069797 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:46.069687 1130597 retry.go:31] will retry after 2.354521412s: waiting for machine to come up
	I0318 14:21:48.425714 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:48.426186 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:48.426219 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:48.426147 1130597 retry.go:31] will retry after 2.74319235s: waiting for machine to come up
	I0318 14:21:46.567767 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.838421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.993039 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:47.096766 1129259 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:47.096883 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:47.596963 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.097569 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.597879 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.097195 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.597924 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.097885 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.597926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:51.096984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
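The pgrep loop above is api_server.go waiting for the apiserver process: roughly every 500ms it re-runs the same probe until a kube-apiserver whose command line mentions minikube shows up. A bare-bones version of that wait (the 4-minute timeout is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // assumed timeout
    	for time.Now().Before(deadline) {
    		// Same probe as the log: match the full command line of kube-apiserver.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("apiserver process is up")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }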
	I0318 14:21:48.699275 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:50.699690 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.311999 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:53.811066 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.173264 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.173844 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:51.173880 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:51.173784 1130597 retry.go:31] will retry after 4.489599719s: waiting for machine to come up
	I0318 14:21:55.665080 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665639 1128583 main.go:141] libmachine: (no-preload-188109) Found IP for machine: 192.168.61.40
	I0318 14:21:55.665675 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has current primary IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665686 1128583 main.go:141] libmachine: (no-preload-188109) Reserving static IP address...
	I0318 14:21:55.666111 1128583 main.go:141] libmachine: (no-preload-188109) Reserved static IP address: 192.168.61.40
	I0318 14:21:55.666149 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.666164 1128583 main.go:141] libmachine: (no-preload-188109) Waiting for SSH to be available...
	I0318 14:21:55.666191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | skip adding static IP to network mk-no-preload-188109 - found existing host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"}
	I0318 14:21:55.666205 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Getting to WaitForSSH function...
	I0318 14:21:55.668473 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668792 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.668837 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668947 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH client type: external
	I0318 14:21:55.668989 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa (-rw-------)
	I0318 14:21:55.669020 1128583 main.go:141] libmachine: (no-preload-188109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:55.669043 1128583 main.go:141] libmachine: (no-preload-188109) DBG | About to run SSH command:
	I0318 14:21:55.669095 1128583 main.go:141] libmachine: (no-preload-188109) DBG | exit 0
	I0318 14:21:55.796228 1128583 main.go:141] libmachine: (no-preload-188109) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:55.796668 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetConfigRaw
	I0318 14:21:55.797378 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:55.800241 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.800716 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.800771 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.801150 1128583 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/config.json ...
	I0318 14:21:55.801416 1128583 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:55.801441 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:55.801690 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.804667 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.597867 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.097894 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.597872 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.096949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.597262 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.097637 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.597078 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.097246 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.597940 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:56.097312 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.700698 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.198658 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.805029 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.805269 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.806759 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.806983 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807220 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807421 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.807623 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.807952 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.807982 1128583 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:55.920939 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:55.920993 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921259 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:21:55.921292 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921510 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.924430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.924921 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.924962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.925153 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.925431 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925792 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.926029 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.926301 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.926320 1128583 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-188109 && echo "no-preload-188109" | sudo tee /etc/hostname
	I0318 14:21:56.051873 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-188109
	
	I0318 14:21:56.051915 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.055015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055387 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.055422 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055659 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.055887 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056058 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056190 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.056318 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.056508 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.056525 1128583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-188109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-188109/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-188109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:56.178366 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:56.178401 1128583 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:56.178443 1128583 buildroot.go:174] setting up certificates
	I0318 14:21:56.178454 1128583 provision.go:84] configureAuth start
	I0318 14:21:56.178465 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:56.178859 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:56.181995 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.182457 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182724 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.185337 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185623 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.185649 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185880 1128583 provision.go:143] copyHostCerts
	I0318 14:21:56.185968 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:56.185983 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:56.186073 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:56.186249 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:56.186264 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:56.186296 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:56.186392 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:56.186406 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:56.186432 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:56.186511 1128583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.no-preload-188109 san=[127.0.0.1 192.168.61.40 localhost minikube no-preload-188109]
	I0318 14:21:56.332196 1128583 provision.go:177] copyRemoteCerts
	I0318 14:21:56.332267 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:56.332295 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.335310 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335604 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.335639 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335787 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.336002 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.336170 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.336310 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.427529 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:56.459132 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:21:56.488690 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:56.516043 1128583 provision.go:87] duration metric: took 337.568576ms to configureAuth
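configureAuth above copies the host-side CA material onto the guest and generates a server certificate whose SAN list (127.0.0.1, 192.168.61.40, localhost, minikube, no-preload-188109) appears a few lines earlier. minikube does this with Go's crypto libraries; purely as a hedged illustration, an equivalent OpenSSL invocation would look roughly like this (file names are illustrative; the process substitution requires bash):

    # Hypothetical OpenSSL equivalent of the server-cert generation minikube does in Go.
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.no-preload-188109"
    openssl x509 -req -in server.csr -days 365 \
        -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
        -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.40,DNS:localhost,DNS:minikube,DNS:no-preload-188109")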
	I0318 14:21:56.516088 1128583 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:56.516309 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:21:56.516457 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.519576 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.519998 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.520059 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.520237 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.520460 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520677 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520876 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.521065 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.521290 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.521307 1128583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:56.831034 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:56.831076 1128583 machine.go:97] duration metric: took 1.029643209s to provisionDockerMachine
	I0318 14:21:56.831092 1128583 start.go:293] postStartSetup for "no-preload-188109" (driver="kvm2")
	I0318 14:21:56.831107 1128583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:56.831126 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:56.831549 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:56.831611 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.834520 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.834962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.834992 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.835234 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.835415 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.835582 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.835743 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.927694 1128583 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:56.932973 1128583 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:56.933002 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:56.933088 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:56.933200 1128583 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:56.933345 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:56.943594 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:56.971483 1128583 start.go:296] duration metric: took 140.368525ms for postStartSetup
	I0318 14:21:56.971564 1128583 fix.go:56] duration metric: took 20.718501273s for fixHost
	I0318 14:21:56.971618 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.974721 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975185 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.975250 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975409 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.975679 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.975885 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.976049 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.976242 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.976438 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.976453 1128583 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:57.089795 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771717.066528661
	
	I0318 14:21:57.089823 1128583 fix.go:216] guest clock: 1710771717.066528661
	I0318 14:21:57.089834 1128583 fix.go:229] Guest: 2024-03-18 14:21:57.066528661 +0000 UTC Remote: 2024-03-18 14:21:56.971568576 +0000 UTC m=+361.214853207 (delta=94.960085ms)
	I0318 14:21:57.089865 1128583 fix.go:200] guest clock delta is within tolerance: 94.960085ms
	I0318 14:21:57.089873 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 20.836840869s
	I0318 14:21:57.089898 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.090297 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:57.094015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094517 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.094563 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094920 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095607 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095844 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095978 1128583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:57.096034 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.096182 1128583 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:57.096221 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.099303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099329 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099754 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099854 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099869 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.100103 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100118 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100339 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100568 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100578 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100766 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.100781 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.203060 1128583 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:57.209943 1128583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:57.368686 1128583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:57.376289 1128583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:57.376375 1128583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:57.394365 1128583 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:57.394405 1128583 start.go:494] detecting cgroup driver to use...
	I0318 14:21:57.394488 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:57.412172 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:57.428895 1128583 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:57.428988 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:57.445064 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:57.461255 1128583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:57.596381 1128583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:57.774782 1128583 docker.go:233] disabling docker service ...
	I0318 14:21:57.774890 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:57.791820 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:57.807412 1128583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:57.961890 1128583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:58.118122 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:58.133994 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:58.155336 1128583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:58.155429 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.167537 1128583 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:58.167642 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.180814 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.193997 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.206817 1128583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:58.220843 1128583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:58.232012 1128583 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:58.232073 1128583 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:58.246610 1128583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:58.260393 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:58.416723 1128583 ssh_runner.go:195] Run: sudo systemctl restart crio
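The block above reconfigures CRI-O to match what kubeadm will expect (pinned pause image, cgroupfs driver, conmon in the pod cgroup); because br_netfilter is not loaded on this image, the first sysctl probe fails and the module is loaded explicitly before the runtime restart. Condensed into a shell sketch using the same paths and values as the log:

    # Point CRI-O at the pause image kubeadm will reference and switch to cgroupfs.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

    # br_netfilter is missing, so load it and enable forwarding before restarting the runtime.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio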
	I0318 14:21:58.588776 1128583 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:58.588864 1128583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:58.594689 1128583 start.go:562] Will wait 60s for crictl version
	I0318 14:21:58.594787 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:58.599287 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:58.634954 1128583 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:58.635059 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.667031 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.703316 1128583 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 14:21:55.812079 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:57.813027 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.310988 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:58.704763 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:58.708030 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708495 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:58.708527 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708738 1128583 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:58.713408 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:58.726934 1128583 kubeadm.go:877] updating cluster {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:58.727067 1128583 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:21:58.727105 1128583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:58.764875 1128583 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 14:21:58.764904 1128583 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:58.764976 1128583 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.765019 1128583 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.765091 1128583 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.765117 1128583 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.765142 1128583 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.765158 1128583 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.765125 1128583 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.765098 1128583 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766495 1128583 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766589 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.766592 1128583 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.766768 1128583 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.766924 1128583 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.766492 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.919274 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 14:21:58.934955 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.945887 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.954907 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.961334 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.976485 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.991515 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.100572 1128583 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 14:21:59.100624 1128583 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.100684 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.125681 1128583 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 14:21:59.125740 1128583 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.125799 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.138461 1128583 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 14:21:59.138521 1128583 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.138579 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149655 1128583 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 14:21:59.149697 1128583 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.149763 1128583 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149803 1128583 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.149831 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.149839 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149790 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149875 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.231815 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.231851 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 14:21:59.231959 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:21:59.232052 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.232060 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.232064 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.232148 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.317997 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 14:21:59.318029 1128583 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318083 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318116 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318158 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318213 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318240 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.318246 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 14:21:59.318252 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318281 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 14:21:59.318315 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.364549 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:56.597953 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.098324 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.598002 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.097907 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.597192 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.097990 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.597523 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.097862 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:01.097925 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.703771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.200048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:02.313802 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.812944 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:03.246360 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.928017963s)
	I0318 14:22:03.246414 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246364 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.928251379s)
	I0318 14:22:03.246429 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 14:22:03.246439 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.92820974s)
	I0318 14:22:03.246454 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246468 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246415 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.928141711s)
	I0318 14:22:03.246512 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246515 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246516 1128583 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.88192635s)
	I0318 14:22:03.246587 1128583 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 14:22:03.246641 1128583 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:03.246704 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:22:01.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.097198 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.597105 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.097996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.597914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.097805 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.597949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.097415 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.597222 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:06.096954 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.203222 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.699887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.813730 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.311491 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.317600 1128583 ssh_runner.go:235] Completed: which crictl: (3.070863461s)
	I0318 14:22:06.317700 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:06.317775 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.071235517s)
	I0318 14:22:06.317805 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 14:22:06.317837 1128583 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.317907 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.370328 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 14:22:06.370435 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.243401402s)
	I0318 14:22:08.613903 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.295918452s)
	I0318 14:22:08.613917 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 14:22:08.613941 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:08.613994 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:06.597785 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.097171 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.597738 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.097476 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.596984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.097503 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.597464 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.096998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.597822 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.097597 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.199978 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.200394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.312752 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:13.812826 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.076840 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462814214s)
	I0318 14:22:11.076881 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 14:22:11.076917 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:11.076968 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:13.332851 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.25584847s)
	I0318 14:22:13.332896 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 14:22:13.332932 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:13.333002 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:14.705785 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.372744893s)
	I0318 14:22:14.705843 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 14:22:14.705881 1128583 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:14.705945 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:15.467380 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 14:22:15.467432 1128583 cache_images.go:123] Successfully loaded all cached images
	I0318 14:22:15.467439 1128583 cache_images.go:92] duration metric: took 16.702522125s to LoadCachedImages
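Because no preload tarball exists for v1.29.0-rc.2, each required image is checked in the runtime, removed if stale, and loaded from the cached tarball under /var/lib/minikube/images. The per-image cycle visible in the interleaved log lines above reduces to roughly this (one image shown; a sketch, not minikube's actual code path):

    # Shape of the per-image load cycle, paths taken from the log.
    IMG=registry.k8s.io/etcd:3.5.10-0
    TAR=/var/lib/minikube/images/etcd_3.5.10-0
    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi "$IMG" || true   # drop whatever partial copy the runtime holds
        sudo podman load -i "$TAR"                # then load the cached tarball into the store
    fi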
	I0318 14:22:15.467456 1128583 kubeadm.go:928] updating node { 192.168.61.40 8443 v1.29.0-rc.2 crio true true} ...
	I0318 14:22:15.467619 1128583 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-188109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:22:15.467790 1128583 ssh_runner.go:195] Run: crio config
	I0318 14:22:15.520678 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:15.520705 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:15.520718 1128583 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:22:15.520740 1128583 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.40 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-188109 NodeName:no-preload-188109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:22:15.520893 1128583 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.40
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-188109"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.40
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.40"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:22:15.520965 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 14:22:15.534187 1128583 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:22:15.534260 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:22:15.546509 1128583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 14:22:15.567029 1128583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 14:22:15.586866 1128583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
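The rendered kubeadm config is staged on the guest as /var/tmp/minikube/kubeadm.yaml.new (previous line). Purely as an illustration of how such a config is consumed, kubeadm accepts it via --config; minikube performs this step itself during start rather than leaving it to the operator:

    # Illustration only; the path mirrors the staging location shown in the log.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml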
	I0318 14:22:15.609161 1128583 ssh_runner.go:195] Run: grep 192.168.61.40	control-plane.minikube.internal$ /etc/hosts
	I0318 14:22:15.614800 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.40	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:22:15.630088 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:22:15.754729 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:22:15.774062 1128583 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109 for IP: 192.168.61.40
	I0318 14:22:15.774093 1128583 certs.go:194] generating shared ca certs ...
	I0318 14:22:15.774114 1128583 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:22:15.774374 1128583 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:22:15.774434 1128583 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:22:15.774448 1128583 certs.go:256] generating profile certs ...
	I0318 14:22:15.774537 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/client.key
	I0318 14:22:15.774607 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key.8d4024a9
	I0318 14:22:15.774652 1128583 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key
	I0318 14:22:15.774833 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:22:15.774871 1128583 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:22:15.774882 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:22:15.774926 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:22:15.774972 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:22:15.775031 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:22:15.775106 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:22:15.775902 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:22:11.597959 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.097914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.597046 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.097863 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.597617 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.097268 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.597088 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.097142 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.597902 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:16.098091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.698561 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:14.199200 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.200026 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.312392 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:18.812463 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:15.821418 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:22:15.874044 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:22:15.910814 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:22:15.965889 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 14:22:16.001003 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:22:16.030033 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:22:16.060519 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:22:16.089952 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:22:16.119397 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:22:16.150036 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:22:16.179489 1128583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:22:16.201823 1128583 ssh_runner.go:195] Run: openssl version
	I0318 14:22:16.208496 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:22:16.222723 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228161 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228239 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.234994 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:22:16.248672 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:22:16.262626 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268255 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268361 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.274868 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:22:16.287251 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:22:16.299690 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304633 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304718 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.311230 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:22:16.325483 1128583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:22:16.331012 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:22:16.338731 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:22:16.346289 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:22:16.353403 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:22:16.359967 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:22:16.367151 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:22:16.373719 1128583 kubeadm.go:391] StartCluster: {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:22:16.373823 1128583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:22:16.373921 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.417874 1128583 cri.go:89] found id: ""
	I0318 14:22:16.417957 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:22:16.431026 1128583 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:22:16.431057 1128583 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:22:16.431065 1128583 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:22:16.431125 1128583 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:22:16.445445 1128583 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:22:16.446576 1128583 kubeconfig.go:125] found "no-preload-188109" server: "https://192.168.61.40:8443"
	I0318 14:22:16.449104 1128583 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:22:16.461001 1128583 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.40
	I0318 14:22:16.461042 1128583 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:22:16.461056 1128583 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:22:16.461104 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.502356 1128583 cri.go:89] found id: ""
	I0318 14:22:16.502437 1128583 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:22:16.525636 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:22:16.538600 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:22:16.538626 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:22:16.538677 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:22:16.550720 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:22:16.550803 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:22:16.562585 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:22:16.573439 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:22:16.573502 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:22:16.585548 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.596619 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:22:16.596706 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.608458 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:22:16.619498 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:22:16.619587 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:22:16.631359 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:22:16.643420 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:16.765437 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:17.862932 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.097434993s)
	I0318 14:22:17.862980 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.097197 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.168390 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.295118 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:22:18.295225 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.795897 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.295431 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.335088 1128583 api_server.go:72] duration metric: took 1.039967082s to wait for apiserver process to appear ...
	I0318 14:22:19.335128 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:22:19.335163 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:19.335912 1128583 api_server.go:269] stopped: https://192.168.61.40:8443/healthz: Get "https://192.168.61.40:8443/healthz": dial tcp 192.168.61.40:8443: connect: connection refused
	I0318 14:22:19.836266 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:16.597253 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.097759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.597764 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.097196 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.597181 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.097798 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.598008 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.097899 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.597717 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:21.097339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.699537 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:21.199910 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:22.338349 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.338383 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.338402 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.351154 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.351190 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.835446 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.841044 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:22.841092 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.335665 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.347092 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.347126 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.835731 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.840517 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.840559 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:24.336151 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:24.340981 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:22:24.354524 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:22:24.354560 1128583 api_server.go:131] duration metric: took 5.019424083s to wait for apiserver health ...
	I0318 14:22:24.354570 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:24.354576 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:24.356602 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:22:20.818751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:23.312003 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:24.358089 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:22:24.375159 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:22:24.426409 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:22:24.452289 1128583 system_pods.go:59] 8 kube-system pods found
	I0318 14:22:24.452326 1128583 system_pods.go:61] "coredns-76f75df574-cksb5" [9cd14e15-7b0f-4978-b667-cba1a54db074] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:22:24.452333 1128583 system_pods.go:61] "etcd-no-preload-188109" [fa7d3ae7-2ac1-4275-8739-686c2e3b7569] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:22:24.452345 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [135ee544-ca83-41ab-9cb2-070587eb3b77] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:22:24.452351 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [fd91846b-6210-4cab-ae0f-5e942b4f596e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:22:24.452361 1128583 system_pods.go:61] "kube-proxy-k5kcr" [a1649d3a-9063-49c3-a8a5-04879eee108b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:22:24.452367 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [5bbb4165-ca8f-4807-ad01-bb35c56b6aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:22:24.452375 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-6pn6n" [004af8d8-fa8c-475c-9604-ed98ccceb3a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:22:24.452390 1128583 system_pods.go:61] "storage-provisioner" [45cae6ca-e3ad-4f7e-9d10-96e091160e4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:22:24.452404 1128583 system_pods.go:74] duration metric: took 25.960889ms to wait for pod list to return data ...
	I0318 14:22:24.452417 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:22:24.456337 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:22:24.456367 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:22:24.456404 1128583 node_conditions.go:105] duration metric: took 3.980296ms to run NodePressure ...
	I0318 14:22:24.456424 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:24.738808 1128583 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743864 1128583 kubeadm.go:733] kubelet initialised
	I0318 14:22:24.743893 1128583 kubeadm.go:734] duration metric: took 5.054661ms waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743905 1128583 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:22:24.749832 1128583 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:21.597443 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.097053 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.597084 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.097025 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.597649 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.097040 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.597607 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.097886 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.597114 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:26.097643 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.700193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.198261 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:25.810553 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:27.811576 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.310813 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.757033 1128583 pod_ready.go:102] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:28.757522 1128583 pod_ready.go:92] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:28.757562 1128583 pod_ready.go:81] duration metric: took 4.007696709s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:28.757576 1128583 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:30.767877 1128583 pod_ready.go:102] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.597493 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.097772 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.597033 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.097997 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.597751 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.097139 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.596987 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.097453 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.598006 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:31.097066 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.199688 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.199994 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:32.311356 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.311807 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.265717 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:31.265745 1128583 pod_ready.go:81] duration metric: took 2.508162139s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:31.265755 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:33.273718 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:35.275477 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.597688 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.097887 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.597759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.097858 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.597065 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.097024 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.597018 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.097472 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.597226 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.097920 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.200137 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.698589 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:36.812617 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.312289 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:37.774164 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.273935 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.273990 1128583 pod_ready.go:81] duration metric: took 8.008204942s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.274005 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280284 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.280313 1128583 pod_ready.go:81] duration metric: took 6.300519ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280324 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286027 1128583 pod_ready.go:92] pod "kube-proxy-k5kcr" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.286052 1128583 pod_ready.go:81] duration metric: took 5.721757ms for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286061 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292404 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.292450 1128583 pod_ready.go:81] duration metric: took 6.381121ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292462 1128583 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:36.597756 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.097176 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.597091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.097280 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.597026 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.097810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.597789 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.097897 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.597313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:41.096966 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.699760 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.198691 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.199259 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.812494 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:44.312890 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.300167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:43.803022 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.597849 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.097957 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.597473 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.097624 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.597810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.098012 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.597317 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.097384 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.597816 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:46.097353 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.199771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:45.698884 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.811124 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.827580 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.300768 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.300891 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.800442 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.597824 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:47.097559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:47.097660 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:47.142970 1129259 cri.go:89] found id: ""
	I0318 14:22:47.143027 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.143040 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:47.143047 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:47.143196 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:47.183530 1129259 cri.go:89] found id: ""
	I0318 14:22:47.183564 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.183573 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:47.183578 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:47.183654 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:47.226284 1129259 cri.go:89] found id: ""
	I0318 14:22:47.226317 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.226351 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:47.226359 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:47.226433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:47.272642 1129259 cri.go:89] found id: ""
	I0318 14:22:47.272684 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.272708 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:47.272725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:47.272791 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:47.318501 1129259 cri.go:89] found id: ""
	I0318 14:22:47.318547 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.318562 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:47.318571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:47.318652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:47.357743 1129259 cri.go:89] found id: ""
	I0318 14:22:47.357786 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.357801 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:47.357810 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:47.357894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:47.398516 1129259 cri.go:89] found id: ""
	I0318 14:22:47.398550 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.398563 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:47.398571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:47.398649 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:47.443375 1129259 cri.go:89] found id: ""
	I0318 14:22:47.443413 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.443426 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:47.443439 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:47.443456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:47.512719 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:47.512773 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:47.560380 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:47.560421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:47.616159 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:47.616221 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:47.631903 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:47.631945 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:47.766159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:50.267365 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:50.287102 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:50.287169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:50.326581 1129259 cri.go:89] found id: ""
	I0318 14:22:50.326618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.326630 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:50.326638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:50.326719 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:50.366526 1129259 cri.go:89] found id: ""
	I0318 14:22:50.366563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.366577 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:50.366585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:50.366656 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:50.407884 1129259 cri.go:89] found id: ""
	I0318 14:22:50.407920 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.407932 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:50.407939 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:50.408011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:50.446932 1129259 cri.go:89] found id: ""
	I0318 14:22:50.446971 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.446982 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:50.446990 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:50.447047 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:50.490489 1129259 cri.go:89] found id: ""
	I0318 14:22:50.490529 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.490542 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:50.490552 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:50.490632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:50.531796 1129259 cri.go:89] found id: ""
	I0318 14:22:50.531876 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.531896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:50.531911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:50.532000 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:50.579429 1129259 cri.go:89] found id: ""
	I0318 14:22:50.579464 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.579473 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:50.579480 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:50.579555 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:50.617981 1129259 cri.go:89] found id: ""
	I0318 14:22:50.618053 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.618070 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:50.618086 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:50.618107 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:50.690265 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:50.690316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:50.738713 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:50.738750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:50.793127 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:50.793176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:50.809608 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:50.809645 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:50.893389 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:47.699312 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.199049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:51.312163 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.812711 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:52.800573 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:54.801034 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.394103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:53.410405 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:53.410485 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:53.451524 1129259 cri.go:89] found id: ""
	I0318 14:22:53.451563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.451577 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:53.451585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:53.451650 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:53.492923 1129259 cri.go:89] found id: ""
	I0318 14:22:53.492958 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.492972 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:53.492980 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:53.493053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:53.535699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.535738 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.535751 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:53.535757 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:53.535846 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:53.575766 1129259 cri.go:89] found id: ""
	I0318 14:22:53.575807 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.575818 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:53.575843 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:53.575922 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:53.613442 1129259 cri.go:89] found id: ""
	I0318 14:22:53.613473 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.613495 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:53.613502 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:53.613567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:53.655108 1129259 cri.go:89] found id: ""
	I0318 14:22:53.655141 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.655152 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:53.655160 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:53.655233 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:53.693839 1129259 cri.go:89] found id: ""
	I0318 14:22:53.693879 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.693891 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:53.693898 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:53.693971 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:53.736699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.736729 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.736737 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:53.736747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:53.736759 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:53.790612 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:53.790670 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:53.806185 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:53.806226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:53.893535 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:53.893575 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:53.893593 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:53.966434 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:53.966482 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:52.698863 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:55.200175 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.311249 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:58.312362 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:57.300207 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.300788 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.513599 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:56.529572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:56.529652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:56.569850 1129259 cri.go:89] found id: ""
	I0318 14:22:56.569890 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.569905 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:56.569923 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:56.570001 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:56.607508 1129259 cri.go:89] found id: ""
	I0318 14:22:56.607542 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.607554 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:56.607562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:56.607625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:56.644693 1129259 cri.go:89] found id: ""
	I0318 14:22:56.644731 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.644742 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:56.644751 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:56.644825 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:56.686265 1129259 cri.go:89] found id: ""
	I0318 14:22:56.686304 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.686316 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:56.686323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:56.686377 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:56.732519 1129259 cri.go:89] found id: ""
	I0318 14:22:56.732552 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.732559 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:56.732565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:56.732639 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:56.770015 1129259 cri.go:89] found id: ""
	I0318 14:22:56.770049 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.770059 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:56.770067 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:56.770120 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:56.813964 1129259 cri.go:89] found id: ""
	I0318 14:22:56.813993 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.814004 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:56.814012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:56.814108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:56.853650 1129259 cri.go:89] found id: ""
	I0318 14:22:56.853695 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.853705 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:56.853718 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:56.853735 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:56.911922 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:56.911971 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:56.935385 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:56.935415 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:57.040668 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:57.040696 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:57.040710 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:57.123258 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:57.123314 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:59.674542 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:59.688636 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:59.688721 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:59.731479 1129259 cri.go:89] found id: ""
	I0318 14:22:59.731508 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.731517 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:59.731523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:59.731599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:59.778127 1129259 cri.go:89] found id: ""
	I0318 14:22:59.778157 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.778169 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:59.778176 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:59.778245 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:59.820812 1129259 cri.go:89] found id: ""
	I0318 14:22:59.820840 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.820850 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:59.820856 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:59.820930 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:59.866491 1129259 cri.go:89] found id: ""
	I0318 14:22:59.866526 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.866539 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:59.866548 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:59.866614 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:59.907135 1129259 cri.go:89] found id: ""
	I0318 14:22:59.907173 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.907185 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:59.907194 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:59.907266 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:59.948578 1129259 cri.go:89] found id: ""
	I0318 14:22:59.948618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.948627 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:59.948633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:59.948698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:59.986724 1129259 cri.go:89] found id: ""
	I0318 14:22:59.986749 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.986758 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:59.986765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:59.986834 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:00.031190 1129259 cri.go:89] found id: ""
	I0318 14:23:00.031223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:00.031233 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:00.031244 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:00.031260 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:00.087925 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:00.087970 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:00.104778 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:00.104810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:00.190730 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:00.190759 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:00.190775 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:00.282713 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:00.282763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:57.698375 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.706517 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:00.814865 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:03.312810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:01.800156 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.302577 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:02.834125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:02.852098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:02.852184 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:02.902683 1129259 cri.go:89] found id: ""
	I0318 14:23:02.902714 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.902726 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:02.902734 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:02.902844 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:02.963685 1129259 cri.go:89] found id: ""
	I0318 14:23:02.963718 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.963742 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:02.963750 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:02.963822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:03.021566 1129259 cri.go:89] found id: ""
	I0318 14:23:03.021600 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.021611 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:03.021618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:03.021689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:03.062577 1129259 cri.go:89] found id: ""
	I0318 14:23:03.062607 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.062616 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:03.062622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:03.062681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:03.101524 1129259 cri.go:89] found id: ""
	I0318 14:23:03.101554 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.101565 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:03.101573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:03.101645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:03.146253 1129259 cri.go:89] found id: ""
	I0318 14:23:03.146282 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.146294 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:03.146309 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:03.146380 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:03.189196 1129259 cri.go:89] found id: ""
	I0318 14:23:03.189230 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.189241 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:03.189250 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:03.189335 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:03.231627 1129259 cri.go:89] found id: ""
	I0318 14:23:03.231663 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.231676 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:03.231688 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:03.231719 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:03.248100 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:03.248144 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:03.325484 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:03.325509 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:03.325522 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:03.406877 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:03.406925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:03.457449 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:03.457487 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.011169 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:06.026962 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:06.027033 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:06.068556 1129259 cri.go:89] found id: ""
	I0318 14:23:06.068595 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.068606 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:06.068615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:06.068695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:06.110627 1129259 cri.go:89] found id: ""
	I0318 14:23:06.110667 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.110679 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:06.110687 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:06.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:02.198461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.199002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.199307 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:05.811934 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:08.312176 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:10.312721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.800938 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:09.302833 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.151933 1129259 cri.go:89] found id: ""
	I0318 14:23:06.152604 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.152620 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:06.152629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:06.152697 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:06.195300 1129259 cri.go:89] found id: ""
	I0318 14:23:06.195338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.195347 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:06.195353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:06.195417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:06.235155 1129259 cri.go:89] found id: ""
	I0318 14:23:06.235207 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.235220 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:06.235229 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:06.235289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:06.282729 1129259 cri.go:89] found id: ""
	I0318 14:23:06.282772 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.282785 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:06.282793 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:06.282869 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:06.323908 1129259 cri.go:89] found id: ""
	I0318 14:23:06.323940 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.323949 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:06.323955 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:06.324011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:06.365846 1129259 cri.go:89] found id: ""
	I0318 14:23:06.365888 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.365902 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:06.365915 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:06.365934 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:06.413646 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:06.413696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.465648 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:06.465688 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:06.480926 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:06.480958 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:06.554929 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:06.554966 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:06.554985 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.139322 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:09.155700 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:09.155768 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:09.200557 1129259 cri.go:89] found id: ""
	I0318 14:23:09.200585 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.200593 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:09.200599 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:09.200653 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:09.239535 1129259 cri.go:89] found id: ""
	I0318 14:23:09.239573 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.239596 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:09.239613 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:09.239698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:09.279206 1129259 cri.go:89] found id: ""
	I0318 14:23:09.279240 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.279249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:09.279256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:09.279313 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:09.323928 1129259 cri.go:89] found id: ""
	I0318 14:23:09.323964 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.323977 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:09.323986 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:09.324062 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:09.365760 1129259 cri.go:89] found id: ""
	I0318 14:23:09.365796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.365807 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:09.365814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:09.365887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:09.411362 1129259 cri.go:89] found id: ""
	I0318 14:23:09.411394 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.411405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:09.411415 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:09.411508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:09.452793 1129259 cri.go:89] found id: ""
	I0318 14:23:09.452822 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.452873 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:09.452880 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:09.452939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:09.494230 1129259 cri.go:89] found id: ""
	I0318 14:23:09.494259 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.494269 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:09.494279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:09.494292 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:09.546804 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:09.546848 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:09.562509 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:09.562545 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:09.637701 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:09.637723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:09.637738 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.721916 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:09.721962 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:08.699862 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.199072 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.315288 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.813053 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.800023 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.300632 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.271942 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:12.288424 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:12.288503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:12.329950 1129259 cri.go:89] found id: ""
	I0318 14:23:12.329990 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.330004 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:12.330012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:12.330083 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:12.368748 1129259 cri.go:89] found id: ""
	I0318 14:23:12.368798 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.368812 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:12.368821 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:12.368894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:12.408280 1129259 cri.go:89] found id: ""
	I0318 14:23:12.408313 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.408323 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:12.408329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:12.408385 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:12.449537 1129259 cri.go:89] found id: ""
	I0318 14:23:12.449583 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.449593 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:12.449605 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:12.449661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:12.488394 1129259 cri.go:89] found id: ""
	I0318 14:23:12.488427 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.488441 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:12.488449 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:12.488528 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:12.527613 1129259 cri.go:89] found id: ""
	I0318 14:23:12.527649 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.527658 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:12.527664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:12.527716 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:12.568953 1129259 cri.go:89] found id: ""
	I0318 14:23:12.568983 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.568991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:12.568997 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:12.569051 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:12.609622 1129259 cri.go:89] found id: ""
	I0318 14:23:12.609661 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.609672 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:12.609683 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:12.609696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:12.663119 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:12.663176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:12.679466 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:12.679508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:12.763085 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:12.763110 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:12.763125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:12.848677 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:12.848721 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.393108 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:15.406670 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:15.406821 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:15.445518 1129259 cri.go:89] found id: ""
	I0318 14:23:15.445556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.445567 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:15.445574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:15.445632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:15.488009 1129259 cri.go:89] found id: ""
	I0318 14:23:15.488040 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.488052 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:15.488089 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:15.488160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:15.526067 1129259 cri.go:89] found id: ""
	I0318 14:23:15.526099 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.526108 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:15.526115 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:15.526185 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:15.567573 1129259 cri.go:89] found id: ""
	I0318 14:23:15.567608 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.567622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:15.567630 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:15.567701 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:15.606585 1129259 cri.go:89] found id: ""
	I0318 14:23:15.606615 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.606626 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:15.606642 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:15.606700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:15.645265 1129259 cri.go:89] found id: ""
	I0318 14:23:15.645296 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.645305 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:15.645312 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:15.645368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:15.685299 1129259 cri.go:89] found id: ""
	I0318 14:23:15.685332 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.685342 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:15.685348 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:15.685421 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:15.725781 1129259 cri.go:89] found id: ""
	I0318 14:23:15.725818 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.725832 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:15.725848 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:15.725867 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.769528 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:15.769568 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:15.825418 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:15.825461 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:15.842139 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:15.842173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:15.922354 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:15.922419 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:15.922438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:13.199539 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:15.700968 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:17.311266 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:19.311540 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:16.800323 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.801497 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.503475 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:18.518462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:18.518561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:18.559354 1129259 cri.go:89] found id: ""
	I0318 14:23:18.559392 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.559404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:18.559412 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:18.559484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:18.604455 1129259 cri.go:89] found id: ""
	I0318 14:23:18.604488 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.604500 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:18.604507 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:18.604592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:18.646032 1129259 cri.go:89] found id: ""
	I0318 14:23:18.646098 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.646110 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:18.646119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:18.646188 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:18.684752 1129259 cri.go:89] found id: ""
	I0318 14:23:18.684791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.684802 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:18.684808 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:18.684863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:18.728256 1129259 cri.go:89] found id: ""
	I0318 14:23:18.728299 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.728321 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:18.728330 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:18.728409 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:18.771335 1129259 cri.go:89] found id: ""
	I0318 14:23:18.771382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.771392 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:18.771398 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:18.771467 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:18.812273 1129259 cri.go:89] found id: ""
	I0318 14:23:18.812305 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.812318 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:18.812331 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:18.812399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:18.854901 1129259 cri.go:89] found id: ""
	I0318 14:23:18.854942 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.854957 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:18.854971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:18.854990 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:18.939982 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:18.940031 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:18.985433 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:18.985465 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:19.041353 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:19.041405 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:19.057764 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:19.057810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:19.131974 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:18.198887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:20.698596 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.312215 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.810513 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.299039 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.300143 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.798699 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
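	Interleaved with the diagnostics from process 1129259, the other test processes (1128583, 1128788, 1128964) keep polling their metrics-server pods for the Ready condition (pod_ready.go:102). The sketch below is an illustration only, not minikube's pod_ready.go: it polls the same condition through kubectl instead of client-go, assumes the current kubeconfig context already points at the cluster under test, and takes the pod name and namespace from the adjacent log lines.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reports whether the pod's Ready condition is currently "True".
	func podReady(namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		for i := 0; i < 10; i++ {
			ready, err := podReady("kube-system", "metrics-server-57f55c9bc5-6pn6n")
			if err != nil {
				fmt.Println("check failed:", err)
			} else if ready {
				fmt.Println("pod is Ready")
				return
			} else {
				fmt.Println("pod not Ready yet")
			}
			// The log lines above are roughly 2-2.5 seconds apart between checks.
			time.Sleep(2 * time.Second)
		}
	}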
	I0318 14:23:21.632395 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:21.646344 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:21.646434 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:21.687475 1129259 cri.go:89] found id: ""
	I0318 14:23:21.687526 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.687542 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:21.687553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:21.687636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:21.728684 1129259 cri.go:89] found id: ""
	I0318 14:23:21.728722 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.728734 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:21.728742 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:21.728816 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:21.772395 1129259 cri.go:89] found id: ""
	I0318 14:23:21.772436 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.772449 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:21.772457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:21.772529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:21.812758 1129259 cri.go:89] found id: ""
	I0318 14:23:21.812793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.812804 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:21.812813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:21.812878 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:21.854334 1129259 cri.go:89] found id: ""
	I0318 14:23:21.854376 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.854387 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:21.854395 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:21.854468 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:21.894237 1129259 cri.go:89] found id: ""
	I0318 14:23:21.894270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.894278 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:21.894285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:21.894339 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:21.931671 1129259 cri.go:89] found id: ""
	I0318 14:23:21.931709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.931720 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:21.931729 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:21.931795 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:21.971060 1129259 cri.go:89] found id: ""
	I0318 14:23:21.971091 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.971100 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:21.971111 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:21.971125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:22.055070 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:22.055126 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.101854 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:22.101888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:22.157502 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:22.157550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:22.175612 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:22.175648 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:22.261607 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:24.761996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:24.777475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:24.777545 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:24.818385 1129259 cri.go:89] found id: ""
	I0318 14:23:24.818421 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.818434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:24.818447 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:24.818508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:24.856232 1129259 cri.go:89] found id: ""
	I0318 14:23:24.856270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.856282 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:24.856291 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:24.856360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:24.891887 1129259 cri.go:89] found id: ""
	I0318 14:23:24.891924 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.891936 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:24.891945 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:24.892020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:24.937555 1129259 cri.go:89] found id: ""
	I0318 14:23:24.937594 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.937605 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:24.937614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:24.937689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:24.978561 1129259 cri.go:89] found id: ""
	I0318 14:23:24.978598 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.978609 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:24.978620 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:24.978692 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:25.026398 1129259 cri.go:89] found id: ""
	I0318 14:23:25.026453 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.026462 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:25.026475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:25.026529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:25.063346 1129259 cri.go:89] found id: ""
	I0318 14:23:25.063382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.063394 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:25.063403 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:25.063482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:25.106097 1129259 cri.go:89] found id: ""
	I0318 14:23:25.106135 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.106147 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:25.106160 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:25.106177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:25.162362 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:25.162412 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:25.179898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:25.179943 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:25.281856 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:25.281896 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:25.281914 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:25.371561 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:25.371605 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.699705 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.200662 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.811810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.813013 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.311457 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.800554 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.304272 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.915774 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:27.931725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:27.931806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:27.971259 1129259 cri.go:89] found id: ""
	I0318 14:23:27.971297 1129259 logs.go:276] 0 containers: []
	W0318 14:23:27.971322 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:27.971340 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:27.971411 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:28.012704 1129259 cri.go:89] found id: ""
	I0318 14:23:28.012735 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.012747 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:28.012755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:28.012829 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:28.051639 1129259 cri.go:89] found id: ""
	I0318 14:23:28.051669 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.051680 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:28.051686 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:28.051753 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:28.091344 1129259 cri.go:89] found id: ""
	I0318 14:23:28.091377 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.091386 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:28.091392 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:28.091445 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:28.131190 1129259 cri.go:89] found id: ""
	I0318 14:23:28.131224 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.131237 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:28.131246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:28.131324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:28.171717 1129259 cri.go:89] found id: ""
	I0318 14:23:28.171756 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.171769 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:28.171777 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:28.171863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:28.207812 1129259 cri.go:89] found id: ""
	I0318 14:23:28.207862 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.207874 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:28.207886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:28.207942 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:28.252721 1129259 cri.go:89] found id: ""
	I0318 14:23:28.252766 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.252779 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:28.252796 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:28.252812 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:28.311227 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:28.311278 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:28.328390 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:28.328422 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:28.413973 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:28.414005 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:28.414026 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:28.504716 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:28.504764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.049944 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:31.065402 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:31.065490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:31.110647 1129259 cri.go:89] found id: ""
	I0318 14:23:31.110675 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.110683 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:31.110690 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:31.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:27.700002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.200376 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.311860 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.313084 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.802042 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:35.299530 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:31.154046 1129259 cri.go:89] found id: ""
	I0318 14:23:31.154075 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.154084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:31.154091 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:31.154162 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:31.191863 1129259 cri.go:89] found id: ""
	I0318 14:23:31.191894 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.191904 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:31.191911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:31.191979 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:31.234961 1129259 cri.go:89] found id: ""
	I0318 14:23:31.234993 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.235003 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:31.235011 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:31.235082 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:31.290365 1129259 cri.go:89] found id: ""
	I0318 14:23:31.290402 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.290414 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:31.290421 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:31.290516 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:31.331162 1129259 cri.go:89] found id: ""
	I0318 14:23:31.331198 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.331211 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:31.331219 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:31.331283 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:31.370382 1129259 cri.go:89] found id: ""
	I0318 14:23:31.370424 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.370436 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:31.370448 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:31.370520 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:31.409913 1129259 cri.go:89] found id: ""
	I0318 14:23:31.409948 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.409959 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:31.409971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:31.409987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:31.493416 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:31.493456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.546275 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:31.546309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:31.598580 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:31.598639 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:31.615741 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:31.615778 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:31.694159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.194339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:34.209763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:34.209849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:34.248405 1129259 cri.go:89] found id: ""
	I0318 14:23:34.248442 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.248456 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:34.248464 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:34.248538 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:34.290217 1129259 cri.go:89] found id: ""
	I0318 14:23:34.290249 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.290263 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:34.290270 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:34.290338 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:34.337403 1129259 cri.go:89] found id: ""
	I0318 14:23:34.337441 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.337452 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:34.337460 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:34.337533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:34.380042 1129259 cri.go:89] found id: ""
	I0318 14:23:34.380082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.380096 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:34.380105 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:34.380181 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:34.417834 1129259 cri.go:89] found id: ""
	I0318 14:23:34.417866 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.417879 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:34.417888 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:34.417960 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:34.456496 1129259 cri.go:89] found id: ""
	I0318 14:23:34.456538 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.456549 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:34.456559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:34.456629 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:34.497772 1129259 cri.go:89] found id: ""
	I0318 14:23:34.497809 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.497822 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:34.497831 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:34.497887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:34.544757 1129259 cri.go:89] found id: ""
	I0318 14:23:34.544811 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.544825 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:34.544840 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:34.544859 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:34.602192 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:34.602237 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:34.619476 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:34.619515 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:34.695721 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.695761 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:34.695781 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:34.773045 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:34.773090 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:32.212811 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.700061 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:36.811811 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.312768 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.300434 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.300586 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.320468 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:37.335756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:37.335847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:37.379742 1129259 cri.go:89] found id: ""
	I0318 14:23:37.379791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.379804 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:37.379812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:37.379898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:37.421225 1129259 cri.go:89] found id: ""
	I0318 14:23:37.421261 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.421276 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:37.421284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:37.421353 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:37.463393 1129259 cri.go:89] found id: ""
	I0318 14:23:37.463426 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.463435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:37.463441 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:37.463503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:37.505835 1129259 cri.go:89] found id: ""
	I0318 14:23:37.505871 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.505879 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:37.505885 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:37.505951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:37.545983 1129259 cri.go:89] found id: ""
	I0318 14:23:37.546016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.546029 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:37.546037 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:37.546110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:37.585433 1129259 cri.go:89] found id: ""
	I0318 14:23:37.585466 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.585477 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:37.585486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:37.585561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:37.622978 1129259 cri.go:89] found id: ""
	I0318 14:23:37.623016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.623027 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:37.623034 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:37.623110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:37.675689 1129259 cri.go:89] found id: ""
	I0318 14:23:37.675721 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.675732 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:37.675743 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:37.675763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:37.785788 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.785820 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:37.785839 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:37.870218 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:37.870261 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:37.918199 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:37.918236 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:37.975082 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:37.975135 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:40.491216 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:40.507123 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:40.507189 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:40.548763 1129259 cri.go:89] found id: ""
	I0318 14:23:40.548796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.548806 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:40.548812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:40.548865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:40.589821 1129259 cri.go:89] found id: ""
	I0318 14:23:40.589859 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.589872 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:40.589879 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:40.589961 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:40.629571 1129259 cri.go:89] found id: ""
	I0318 14:23:40.629603 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.629615 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:40.629622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:40.629698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:40.668648 1129259 cri.go:89] found id: ""
	I0318 14:23:40.668682 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.668692 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:40.668719 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:40.668789 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:40.712948 1129259 cri.go:89] found id: ""
	I0318 14:23:40.713005 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.713018 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:40.713027 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:40.713103 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:40.763269 1129259 cri.go:89] found id: ""
	I0318 14:23:40.763298 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.763307 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:40.763313 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:40.763366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:40.809737 1129259 cri.go:89] found id: ""
	I0318 14:23:40.809776 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.809789 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:40.809798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:40.809873 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:40.849882 1129259 cri.go:89] found id: ""
	I0318 14:23:40.849921 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.849931 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:40.849941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:40.849961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:40.931042 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:40.931084 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:40.973246 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:40.973280 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:41.028835 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:41.028880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:41.044250 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:41.044293 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:41.116937 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.199672 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.698826 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.810759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.812721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.800736 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.617773 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:43.635147 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:43.635216 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:43.683392 1129259 cri.go:89] found id: ""
	I0318 14:23:43.683430 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.683446 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:43.683455 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:43.683521 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:43.729761 1129259 cri.go:89] found id: ""
	I0318 14:23:43.729801 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.729813 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:43.729820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:43.729888 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:43.790694 1129259 cri.go:89] found id: ""
	I0318 14:23:43.790728 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.790741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:43.790748 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:43.790819 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:43.838506 1129259 cri.go:89] found id: ""
	I0318 14:23:43.838537 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.838548 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:43.838557 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:43.838625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:43.879695 1129259 cri.go:89] found id: ""
	I0318 14:23:43.879725 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.879735 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:43.879743 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:43.879806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:43.919206 1129259 cri.go:89] found id: ""
	I0318 14:23:43.919238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.919250 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:43.919258 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:43.919333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:43.966266 1129259 cri.go:89] found id: ""
	I0318 14:23:43.966308 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.966321 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:43.966329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:43.966399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:44.006272 1129259 cri.go:89] found id: ""
	I0318 14:23:44.006310 1129259 logs.go:276] 0 containers: []
	W0318 14:23:44.006324 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:44.006339 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:44.006358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:44.063345 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:44.063395 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:44.079323 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:44.079365 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:44.158132 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:44.158157 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:44.158177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:44.244657 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:44.244707 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:41.707557 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.199509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.311703 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.811077 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.301804 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.800280 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.801802 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.791776 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:46.807457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:46.807547 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:46.849964 1129259 cri.go:89] found id: ""
	I0318 14:23:46.850003 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.850017 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:46.850025 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:46.850084 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:46.893174 1129259 cri.go:89] found id: ""
	I0318 14:23:46.893214 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.893227 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:46.893235 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:46.893314 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:46.933932 1129259 cri.go:89] found id: ""
	I0318 14:23:46.933969 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.933981 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:46.933998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:46.934075 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:46.973034 1129259 cri.go:89] found id: ""
	I0318 14:23:46.973073 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.973085 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:46.973093 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:46.973165 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:47.013465 1129259 cri.go:89] found id: ""
	I0318 14:23:47.013502 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.013515 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:47.013523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:47.013595 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:47.050526 1129259 cri.go:89] found id: ""
	I0318 14:23:47.050556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.050569 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:47.050583 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:47.050651 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:47.090395 1129259 cri.go:89] found id: ""
	I0318 14:23:47.090435 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.090448 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:47.090456 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:47.090533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:47.132761 1129259 cri.go:89] found id: ""
	I0318 14:23:47.132790 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.132799 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:47.132809 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:47.132822 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:47.179035 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:47.179073 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:47.231641 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:47.231687 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:47.248134 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:47.248171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:47.330265 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:47.330294 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:47.330311 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:49.912288 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:49.927753 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:49.927842 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:49.968306 1129259 cri.go:89] found id: ""
	I0318 14:23:49.968338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:49.968348 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:49.968354 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:49.968424 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:50.009781 1129259 cri.go:89] found id: ""
	I0318 14:23:50.009813 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.009821 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:50.009828 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:50.009892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:50.049203 1129259 cri.go:89] found id: ""
	I0318 14:23:50.049238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.049249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:50.049257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:50.049323 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:50.089679 1129259 cri.go:89] found id: ""
	I0318 14:23:50.089709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.089719 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:50.089725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:50.089790 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:50.132352 1129259 cri.go:89] found id: ""
	I0318 14:23:50.132384 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.132395 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:50.132404 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:50.132474 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:50.169043 1129259 cri.go:89] found id: ""
	I0318 14:23:50.169076 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.169089 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:50.169098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:50.169166 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:50.207753 1129259 cri.go:89] found id: ""
	I0318 14:23:50.207793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.207805 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:50.207813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:50.207898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:50.247048 1129259 cri.go:89] found id: ""
	I0318 14:23:50.247082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.247093 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:50.247103 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:50.247114 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:50.299768 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:50.299816 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:50.317627 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:50.317674 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:50.393122 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:50.393152 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:50.393170 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:50.480828 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:50.480880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:46.698786 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:49.198083 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:51.198509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.812029 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.311681 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.300917 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.301653 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.030467 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.044538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:53.044615 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:53.082312 1129259 cri.go:89] found id: ""
	I0318 14:23:53.082351 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.082361 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:53.082370 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:53.082431 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:53.127597 1129259 cri.go:89] found id: ""
	I0318 14:23:53.127631 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.127640 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:53.127645 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:53.127708 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:53.172152 1129259 cri.go:89] found id: ""
	I0318 14:23:53.172189 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.172203 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:53.172212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:53.172295 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:53.210210 1129259 cri.go:89] found id: ""
	I0318 14:23:53.210268 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.210281 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:53.210289 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:53.210356 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:53.248963 1129259 cri.go:89] found id: ""
	I0318 14:23:53.248995 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.249004 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:53.249010 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:53.249065 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:53.287853 1129259 cri.go:89] found id: ""
	I0318 14:23:53.287886 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.287896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:53.287903 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:53.287956 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:53.326858 1129259 cri.go:89] found id: ""
	I0318 14:23:53.326895 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.326908 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:53.326917 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:53.326987 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:53.369347 1129259 cri.go:89] found id: ""
	I0318 14:23:53.369381 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.369394 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:53.369407 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:53.369424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:53.420342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:53.420387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:53.436718 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:53.436750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:53.517954 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:53.518018 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:53.518036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:53.597726 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:53.597782 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
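The block above is one pass of minikube's log-collection loop while it waits for the control plane: it probes for a kube-apiserver process, asks CRI-O via crictl for each expected container by name, and, finding none, falls back to dumping kubelet, dmesg, "describe nodes", CRI-O, and container-status output. A minimal shell sketch of the same pass, using only commands already shown in this log:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        sudo crictl ps -a --quiet --name="$name"   # empty output means no such container
    done
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo crictl ps -a || sudo docker ps -a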
	I0318 14:23:56.144313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.699341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.699481 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.810495 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.810917 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:59.812265 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.800712 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.300089 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
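The interleaved pod_ready lines come from the other clusters under test in this run, each polling its metrics-server pod until the Ready condition turns True. An equivalent one-off check (hypothetical command, not taken from this log) would be:

    kubectl -n kube-system get pod metrics-server-57f55c9bc5-jr9wp \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'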
	I0318 14:23:56.159569 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:56.159663 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:56.198525 1129259 cri.go:89] found id: ""
	I0318 14:23:56.198563 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.198575 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:56.198584 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:56.198662 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:56.242877 1129259 cri.go:89] found id: ""
	I0318 14:23:56.242913 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.242927 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:56.242942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:56.243018 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:56.282499 1129259 cri.go:89] found id: ""
	I0318 14:23:56.282531 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.282541 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:56.282547 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:56.282618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:56.321765 1129259 cri.go:89] found id: ""
	I0318 14:23:56.321810 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.321825 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:56.321833 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:56.321904 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:56.364005 1129259 cri.go:89] found id: ""
	I0318 14:23:56.364042 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.364054 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:56.364064 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:56.364138 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:56.402312 1129259 cri.go:89] found id: ""
	I0318 14:23:56.402339 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.402350 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:56.402356 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:56.402419 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:56.445638 1129259 cri.go:89] found id: ""
	I0318 14:23:56.445674 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.445686 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:56.445694 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:56.445760 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:56.488833 1129259 cri.go:89] found id: ""
	I0318 14:23:56.488870 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.488883 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:56.488896 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:56.488915 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:56.540862 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:56.540907 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:56.557124 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:56.557171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:56.634679 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:56.634711 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:56.634727 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:56.716419 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:56.716464 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.263125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:59.277619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:59.277703 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:59.318616 1129259 cri.go:89] found id: ""
	I0318 14:23:59.318648 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.318661 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:59.318668 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:59.318740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:59.358540 1129259 cri.go:89] found id: ""
	I0318 14:23:59.358577 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.358589 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:59.358597 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:59.358670 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:59.399046 1129259 cri.go:89] found id: ""
	I0318 14:23:59.399082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.399093 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:59.399099 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:59.399169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:59.439165 1129259 cri.go:89] found id: ""
	I0318 14:23:59.439223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.439236 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:59.439245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:59.439312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:59.476719 1129259 cri.go:89] found id: ""
	I0318 14:23:59.476755 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.476767 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:59.476775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:59.476833 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:59.515847 1129259 cri.go:89] found id: ""
	I0318 14:23:59.515878 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.515888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:59.515895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:59.515966 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:59.560831 1129259 cri.go:89] found id: ""
	I0318 14:23:59.560861 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.560871 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:59.560877 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:59.560939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:59.601176 1129259 cri.go:89] found id: ""
	I0318 14:23:59.601209 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.601219 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:59.601237 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:59.601253 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:59.616829 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:59.616862 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:59.695270 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:59.695300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:59.695316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:59.773564 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:59.773610 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.819326 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:59.819364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:58.198656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.699394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.311601 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.311669 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.300584 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.300628 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.372331 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:02.388245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:02.388333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:02.425594 1129259 cri.go:89] found id: ""
	I0318 14:24:02.425639 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.425655 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:02.425664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:02.425740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:02.467755 1129259 cri.go:89] found id: ""
	I0318 14:24:02.467786 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.467794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:02.467800 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:02.467890 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:02.510004 1129259 cri.go:89] found id: ""
	I0318 14:24:02.510035 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.510045 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:02.510051 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:02.510104 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:02.555590 1129259 cri.go:89] found id: ""
	I0318 14:24:02.555623 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.555632 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:02.555638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:02.555693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:02.595096 1129259 cri.go:89] found id: ""
	I0318 14:24:02.595125 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.595135 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:02.595141 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:02.595214 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:02.639452 1129259 cri.go:89] found id: ""
	I0318 14:24:02.639482 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.639491 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:02.639498 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:02.639563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:02.677653 1129259 cri.go:89] found id: ""
	I0318 14:24:02.677684 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.677700 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:02.677706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:02.677765 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:02.714853 1129259 cri.go:89] found id: ""
	I0318 14:24:02.714885 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.714898 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:02.714909 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:02.714923 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:02.767697 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:02.767742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:02.782786 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:02.782844 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:02.868981 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:02.869020 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:02.869037 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:02.944382 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:02.944421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.491779 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:05.507129 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:05.507213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:05.548809 1129259 cri.go:89] found id: ""
	I0318 14:24:05.548845 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.548858 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:05.548866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:05.548941 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:05.588005 1129259 cri.go:89] found id: ""
	I0318 14:24:05.588040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.588050 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:05.588056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:05.588108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:05.627670 1129259 cri.go:89] found id: ""
	I0318 14:24:05.627707 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.627720 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:05.627728 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:05.627814 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:05.666900 1129259 cri.go:89] found id: ""
	I0318 14:24:05.666936 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.666948 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:05.666957 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:05.667029 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:05.705796 1129259 cri.go:89] found id: ""
	I0318 14:24:05.705831 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.705844 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:05.705852 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:05.705923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:05.749842 1129259 cri.go:89] found id: ""
	I0318 14:24:05.749875 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.749888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:05.749896 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:05.749981 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:05.790843 1129259 cri.go:89] found id: ""
	I0318 14:24:05.790881 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.790896 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:05.790905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:05.790992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:05.832347 1129259 cri.go:89] found id: ""
	I0318 14:24:05.832383 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.832395 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:05.832408 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:05.832424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.874185 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:05.874219 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:05.929482 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:05.929534 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:05.945151 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:05.945187 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:06.024617 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:06.024644 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:06.024663 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:03.198564 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:05.198935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.811819 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.812462 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.300681 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.300912 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.799297 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.607030 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:08.622039 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:08.622140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:08.661599 1129259 cri.go:89] found id: ""
	I0318 14:24:08.661638 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.661647 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:08.661654 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:08.661728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:08.699890 1129259 cri.go:89] found id: ""
	I0318 14:24:08.699920 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.699931 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:08.699940 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:08.700009 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:08.745504 1129259 cri.go:89] found id: ""
	I0318 14:24:08.745541 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.745554 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:08.745562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:08.745624 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:08.784162 1129259 cri.go:89] found id: ""
	I0318 14:24:08.784204 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.784217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:08.784226 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:08.784302 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:08.824197 1129259 cri.go:89] found id: ""
	I0318 14:24:08.824227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.824236 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:08.824242 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:08.824301 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:08.865096 1129259 cri.go:89] found id: ""
	I0318 14:24:08.865128 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.865137 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:08.865146 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:08.865207 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:08.905337 1129259 cri.go:89] found id: ""
	I0318 14:24:08.905371 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.905385 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:08.905393 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:08.905477 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:08.945837 1129259 cri.go:89] found id: ""
	I0318 14:24:08.945880 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.945894 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:08.945906 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:08.945925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:09.023425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:09.023454 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:09.023473 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:09.107945 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:09.107989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:09.149742 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:09.149804 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:09.202813 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:09.202856 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:07.699433 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.198062 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.311072 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:13.311533 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:15.313064 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:12.799619 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.800637 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.720686 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:11.735125 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:11.735218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:11.772164 1129259 cri.go:89] found id: ""
	I0318 14:24:11.772198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.772210 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:11.772218 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:11.772285 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:11.811279 1129259 cri.go:89] found id: ""
	I0318 14:24:11.811309 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.811326 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:11.811334 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:11.811402 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:11.855011 1129259 cri.go:89] found id: ""
	I0318 14:24:11.855052 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.855065 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:11.855073 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:11.855146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:11.893168 1129259 cri.go:89] found id: ""
	I0318 14:24:11.893198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.893206 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:11.893212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:11.893273 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:11.930545 1129259 cri.go:89] found id: ""
	I0318 14:24:11.930583 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.930598 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:11.930608 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:11.930680 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:11.974014 1129259 cri.go:89] found id: ""
	I0318 14:24:11.974040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.974049 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:11.974063 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:11.974147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:12.025218 1129259 cri.go:89] found id: ""
	I0318 14:24:12.025247 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.025257 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:12.025263 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:12.025340 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:12.068361 1129259 cri.go:89] found id: ""
	I0318 14:24:12.068393 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.068406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:12.068425 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:12.068444 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:12.122840 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:12.122892 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:12.138841 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:12.138877 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:12.219567 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:12.219588 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:12.219602 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:12.307322 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:12.307368 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:14.855576 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:14.870076 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:14.870160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:14.910346 1129259 cri.go:89] found id: ""
	I0318 14:24:14.910387 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.910399 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:14.910407 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:14.910479 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:14.957120 1129259 cri.go:89] found id: ""
	I0318 14:24:14.957151 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.957165 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:14.957170 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:14.957238 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:14.998329 1129259 cri.go:89] found id: ""
	I0318 14:24:14.998360 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.998372 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:14.998381 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:14.998450 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:15.036994 1129259 cri.go:89] found id: ""
	I0318 14:24:15.037025 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.037034 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:15.037040 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:15.037095 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:15.075241 1129259 cri.go:89] found id: ""
	I0318 14:24:15.075272 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.075282 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:15.075288 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:15.075368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:15.114149 1129259 cri.go:89] found id: ""
	I0318 14:24:15.114199 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.114208 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:15.114215 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:15.114296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:15.155710 1129259 cri.go:89] found id: ""
	I0318 14:24:15.155745 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.155755 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:15.155762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:15.155847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:15.196863 1129259 cri.go:89] found id: ""
	I0318 14:24:15.196899 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.196910 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:15.196928 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:15.196946 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:15.253103 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:15.253147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:15.268783 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:15.268829 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:15.352694 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:15.352723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:15.352743 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:15.435023 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:15.435068 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:12.201234 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.698988 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.811663 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.812068 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:16.801294 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.301959 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.978170 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.994862 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:17.994929 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:18.036067 1129259 cri.go:89] found id: ""
	I0318 14:24:18.036103 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.036112 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:18.036119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:18.036186 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:18.081249 1129259 cri.go:89] found id: ""
	I0318 14:24:18.081280 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.081291 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:18.081297 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:18.081352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:18.122336 1129259 cri.go:89] found id: ""
	I0318 14:24:18.122367 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.122376 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:18.122382 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:18.122441 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:18.163897 1129259 cri.go:89] found id: ""
	I0318 14:24:18.163931 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.163940 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:18.163949 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:18.164012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:18.206744 1129259 cri.go:89] found id: ""
	I0318 14:24:18.206781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.206792 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:18.206798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:18.206881 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:18.245738 1129259 cri.go:89] found id: ""
	I0318 14:24:18.245767 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.245778 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:18.245786 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:18.245851 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:18.285181 1129259 cri.go:89] found id: ""
	I0318 14:24:18.285211 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.285221 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:18.285228 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:18.285282 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:18.328130 1129259 cri.go:89] found id: ""
	I0318 14:24:18.328162 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.328174 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:18.328193 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:18.328210 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:18.410346 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:18.410387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:18.467118 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:18.467154 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:18.530635 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:18.530704 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:18.549898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:18.549952 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:18.646134 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
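Every "describe nodes" attempt in this stretch fails the same way: kubectl expects the apiserver on localhost:8443 and the connection is refused, which is consistent with crictl finding no kube-apiserver container. A quick manual check for the same condition (hypothetical commands, not from this log, assuming standard tooling on the node):

    sudo ss -ltn 'sport = :8443'                     # nothing listening while the apiserver is down
    curl -ksS https://localhost:8443/healthz; echo   # connection refused until it comes back up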
	I0318 14:24:21.146368 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.199048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.200040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:22.312401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.812678 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.799684 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.301211 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.162077 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:21.162156 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:21.200211 1129259 cri.go:89] found id: ""
	I0318 14:24:21.200242 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.200251 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:21.200257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:21.200329 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:21.241228 1129259 cri.go:89] found id: ""
	I0318 14:24:21.241265 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.241277 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:21.241284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:21.241359 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:21.278110 1129259 cri.go:89] found id: ""
	I0318 14:24:21.278147 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.278159 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:21.278167 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:21.278240 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:21.317067 1129259 cri.go:89] found id: ""
	I0318 14:24:21.317104 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.317115 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:21.317124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:21.317201 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:21.356217 1129259 cri.go:89] found id: ""
	I0318 14:24:21.356251 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.356260 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:21.356267 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:21.356326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:21.394990 1129259 cri.go:89] found id: ""
	I0318 14:24:21.395031 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.395047 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:21.395056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:21.395136 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:21.435880 1129259 cri.go:89] found id: ""
	I0318 14:24:21.435913 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.435928 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:21.435937 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:21.436023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:21.477754 1129259 cri.go:89] found id: ""
	I0318 14:24:21.477801 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.477814 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:21.477826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:21.477851 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:21.493178 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:21.493220 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:21.570200 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.570239 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:21.570257 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:21.658100 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:21.658147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.703286 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:21.703327 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.266730 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:24.285544 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:24.285655 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:24.338183 1129259 cri.go:89] found id: ""
	I0318 14:24:24.338234 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.338248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:24.338256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:24.338326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:24.407496 1129259 cri.go:89] found id: ""
	I0318 14:24:24.407529 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.407543 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:24.407551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:24.407618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:24.457689 1129259 cri.go:89] found id: ""
	I0318 14:24:24.457728 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.457741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:24.457749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:24.457831 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:24.498685 1129259 cri.go:89] found id: ""
	I0318 14:24:24.498709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.498718 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:24.498725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:24.498783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:24.537966 1129259 cri.go:89] found id: ""
	I0318 14:24:24.537999 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.538009 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:24.538016 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:24.538070 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:24.576493 1129259 cri.go:89] found id: ""
	I0318 14:24:24.576522 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.576532 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:24.576538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:24.576592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:24.613764 1129259 cri.go:89] found id: ""
	I0318 14:24:24.613799 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.613812 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:24.613820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:24.613893 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:24.655862 1129259 cri.go:89] found id: ""
	I0318 14:24:24.655892 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.655906 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:24.655919 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:24.655937 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.710557 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:24.710604 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:24.725755 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:24.725792 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:24.805585 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:24.805616 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:24.805633 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:24.889922 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:24.889989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.699674 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.199382 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.312672 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.315087 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:26.800594 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.299763 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.437998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:27.454560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:27.454664 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:27.493973 1129259 cri.go:89] found id: ""
	I0318 14:24:27.494003 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.494011 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:27.494019 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:27.494078 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:27.543071 1129259 cri.go:89] found id: ""
	I0318 14:24:27.543109 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.543122 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:27.543131 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:27.543211 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:27.586163 1129259 cri.go:89] found id: ""
	I0318 14:24:27.586196 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.586212 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:27.586220 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:27.586324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:27.625233 1129259 cri.go:89] found id: ""
	I0318 14:24:27.625271 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.625284 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:27.625293 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:27.625365 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:27.663729 1129259 cri.go:89] found id: ""
	I0318 14:24:27.663772 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.663782 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:27.663798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:27.663887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:27.702041 1129259 cri.go:89] found id: ""
	I0318 14:24:27.702072 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.702082 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:27.702090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:27.702158 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:27.745186 1129259 cri.go:89] found id: ""
	I0318 14:24:27.745216 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.745226 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:27.745233 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:27.745296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:27.786673 1129259 cri.go:89] found id: ""
	I0318 14:24:27.786709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.786719 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:27.786729 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:27.786742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:27.842472 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:27.842531 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:27.856985 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:27.857016 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:27.935445 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:27.935478 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:27.935496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:28.024737 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:28.024795 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:30.571003 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:30.585617 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:30.585714 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:30.628461 1129259 cri.go:89] found id: ""
	I0318 14:24:30.628488 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.628497 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:30.628503 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:30.628566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:30.674555 1129259 cri.go:89] found id: ""
	I0318 14:24:30.674595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.674610 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:30.674618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:30.674695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:30.714899 1129259 cri.go:89] found id: ""
	I0318 14:24:30.714950 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.714961 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:30.714970 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:30.715039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:30.756263 1129259 cri.go:89] found id: ""
	I0318 14:24:30.756295 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.756305 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:30.756311 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:30.756366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:30.795213 1129259 cri.go:89] found id: ""
	I0318 14:24:30.795244 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.795258 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:30.795265 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:30.795336 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:30.837198 1129259 cri.go:89] found id: ""
	I0318 14:24:30.837233 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.837242 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:30.837248 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:30.837306 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:30.875367 1129259 cri.go:89] found id: ""
	I0318 14:24:30.875404 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.875417 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:30.875427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:30.875510 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:30.918664 1129259 cri.go:89] found id: ""
	I0318 14:24:30.918701 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.918713 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:30.918727 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:30.918747 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:31.004325 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:31.004350 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:31.004367 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:31.093837 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:31.093882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:31.138285 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:31.138318 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:26.698769 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:28.700212 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.200571 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.811482 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.812980 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.299818 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.300656 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.798808 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.192059 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:31.192106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:33.708873 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:33.723861 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:33.723954 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:33.766843 1129259 cri.go:89] found id: ""
	I0318 14:24:33.766884 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.766899 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:33.766908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:33.766991 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:33.808273 1129259 cri.go:89] found id: ""
	I0318 14:24:33.808308 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.808319 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:33.808327 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:33.808401 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:33.847755 1129259 cri.go:89] found id: ""
	I0318 14:24:33.847789 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.847801 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:33.847823 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:33.847909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:33.888733 1129259 cri.go:89] found id: ""
	I0318 14:24:33.888785 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.888807 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:33.888817 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:33.888892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:33.927231 1129259 cri.go:89] found id: ""
	I0318 14:24:33.927281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.927294 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:33.927301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:33.927370 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:33.968573 1129259 cri.go:89] found id: ""
	I0318 14:24:33.968602 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.968612 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:33.968619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:33.968685 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:34.019265 1129259 cri.go:89] found id: ""
	I0318 14:24:34.019298 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.019314 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:34.019321 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:34.019392 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:34.059195 1129259 cri.go:89] found id: ""
	I0318 14:24:34.059226 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.059237 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:34.059251 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:34.059268 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:34.101211 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:34.101252 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:34.154985 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:34.155029 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:34.169762 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:34.169798 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:34.247258 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:34.247289 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:34.247304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:33.698578 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.698656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.814759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:38.311080 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:40.312503 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:37.800024 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.801292 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:36.829539 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:36.844908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:36.845003 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:36.883646 1129259 cri.go:89] found id: ""
	I0318 14:24:36.883673 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.883682 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:36.883688 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:36.883742 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:36.927651 1129259 cri.go:89] found id: ""
	I0318 14:24:36.927685 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.927700 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:36.927706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:36.927774 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:36.972206 1129259 cri.go:89] found id: ""
	I0318 14:24:36.972243 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.972256 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:36.972264 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:36.972337 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:37.011161 1129259 cri.go:89] found id: ""
	I0318 14:24:37.011203 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.011217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:37.011225 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:37.011293 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:37.050426 1129259 cri.go:89] found id: ""
	I0318 14:24:37.050456 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.050465 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:37.050472 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:37.050525 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:37.090240 1129259 cri.go:89] found id: ""
	I0318 14:24:37.090277 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.090288 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:37.090296 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:37.090371 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:37.138359 1129259 cri.go:89] found id: ""
	I0318 14:24:37.138392 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.138405 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:37.138414 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:37.138484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:37.175367 1129259 cri.go:89] found id: ""
	I0318 14:24:37.175397 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.175406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:37.175419 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:37.175438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.190633 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:37.190665 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:37.266426 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:37.266455 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:37.266474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:37.352005 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:37.352052 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:37.398004 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:37.398042 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:39.957926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:39.972906 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:39.972994 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:40.015482 1129259 cri.go:89] found id: ""
	I0318 14:24:40.015531 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.015543 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:40.015553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:40.015632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:40.057869 1129259 cri.go:89] found id: ""
	I0318 14:24:40.057901 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.057913 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:40.057921 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:40.057992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:40.099638 1129259 cri.go:89] found id: ""
	I0318 14:24:40.099666 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.099676 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:40.099683 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:40.099748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:40.137566 1129259 cri.go:89] found id: ""
	I0318 14:24:40.137607 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.137619 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:40.137629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:40.137698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:40.178781 1129259 cri.go:89] found id: ""
	I0318 14:24:40.178816 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.178828 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:40.178835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:40.178902 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:40.221065 1129259 cri.go:89] found id: ""
	I0318 14:24:40.221106 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.221118 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:40.221135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:40.221213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:40.262154 1129259 cri.go:89] found id: ""
	I0318 14:24:40.262193 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.262204 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:40.262212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:40.262288 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:40.302898 1129259 cri.go:89] found id: ""
	I0318 14:24:40.302932 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.302944 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:40.302957 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:40.302973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:40.384224 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:40.384248 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:40.384270 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:40.473257 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:40.473313 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:40.513518 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:40.513571 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:40.569342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:40.569393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.698736 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.699014 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.813028 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.814259 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.300121 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.802581 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:43.085260 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:43.100701 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:43.100773 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:43.141395 1129259 cri.go:89] found id: ""
	I0318 14:24:43.141441 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.141453 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:43.141462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:43.141531 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:43.185883 1129259 cri.go:89] found id: ""
	I0318 14:24:43.185918 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.185929 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:43.185938 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:43.186012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:43.225249 1129259 cri.go:89] found id: ""
	I0318 14:24:43.225281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.225292 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:43.225301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:43.225375 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:43.270433 1129259 cri.go:89] found id: ""
	I0318 14:24:43.270474 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.270484 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:43.270491 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:43.270557 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:43.312947 1129259 cri.go:89] found id: ""
	I0318 14:24:43.312975 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.312986 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:43.312994 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:43.313061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:43.352095 1129259 cri.go:89] found id: ""
	I0318 14:24:43.352130 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.352144 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:43.352153 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:43.352222 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:43.394789 1129259 cri.go:89] found id: ""
	I0318 14:24:43.394820 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.394833 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:43.394840 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:43.394913 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:43.440612 1129259 cri.go:89] found id: ""
	I0318 14:24:43.440646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.440655 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:43.440668 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:43.440686 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:43.497257 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:43.497304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:43.513680 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:43.513715 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:43.599437 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:43.599471 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:43.599490 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:43.681435 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:43.681480 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:42.198235 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.199088 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.312598 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.814542 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.300765 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.801469 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:46.227650 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:46.242656 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:46.242724 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:46.288400 1129259 cri.go:89] found id: ""
	I0318 14:24:46.288434 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.288448 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:46.288457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:46.288544 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:46.327648 1129259 cri.go:89] found id: ""
	I0318 14:24:46.327691 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.327704 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:46.327712 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:46.327785 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:46.370251 1129259 cri.go:89] found id: ""
	I0318 14:24:46.370292 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.370305 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:46.370322 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:46.370404 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:46.413589 1129259 cri.go:89] found id: ""
	I0318 14:24:46.413629 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.413639 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:46.413646 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:46.413712 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:46.453557 1129259 cri.go:89] found id: ""
	I0318 14:24:46.453593 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.453606 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:46.453615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:46.453696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:46.492502 1129259 cri.go:89] found id: ""
	I0318 14:24:46.492538 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.492552 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:46.492560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:46.492641 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:46.534614 1129259 cri.go:89] found id: ""
	I0318 14:24:46.534646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.534656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:46.534662 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:46.534722 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:46.576300 1129259 cri.go:89] found id: ""
	I0318 14:24:46.576331 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.576340 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:46.576351 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:46.576363 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.665281 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:46.665329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:46.712011 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:46.712050 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:46.799071 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:46.799128 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:46.814892 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:46.814921 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:46.893065 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.393340 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:49.407307 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:49.407388 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:49.449296 1129259 cri.go:89] found id: ""
	I0318 14:24:49.449330 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.449343 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:49.449351 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:49.449412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:49.489753 1129259 cri.go:89] found id: ""
	I0318 14:24:49.489781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.489790 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:49.489796 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:49.489865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:49.533692 1129259 cri.go:89] found id: ""
	I0318 14:24:49.533740 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.533756 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:49.533765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:49.533849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:49.580932 1129259 cri.go:89] found id: ""
	I0318 14:24:49.580980 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.580992 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:49.581001 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:49.581090 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:49.617642 1129259 cri.go:89] found id: ""
	I0318 14:24:49.617672 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.617684 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:49.617692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:49.617758 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:49.655313 1129259 cri.go:89] found id: ""
	I0318 14:24:49.655342 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.655351 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:49.655358 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:49.655412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:49.694613 1129259 cri.go:89] found id: ""
	I0318 14:24:49.694645 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.694656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:49.694665 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:49.694735 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:49.736954 1129259 cri.go:89] found id: ""
	I0318 14:24:49.737005 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.737017 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:49.737030 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:49.737051 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:49.779496 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:49.779540 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:49.836505 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:49.836549 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:49.853299 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:49.853329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:49.929231 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.929254 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:49.929269 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.699746 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.198789 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:51.199313 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.311753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.311952 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.301766 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.513104 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:52.534931 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:52.535032 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:52.578668 1129259 cri.go:89] found id: ""
	I0318 14:24:52.578706 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.578720 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:52.578731 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:52.578788 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:52.616799 1129259 cri.go:89] found id: ""
	I0318 14:24:52.616829 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.616838 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:52.616845 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:52.616909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:52.659502 1129259 cri.go:89] found id: ""
	I0318 14:24:52.659595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.659616 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:52.659627 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:52.659696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:52.704402 1129259 cri.go:89] found id: ""
	I0318 14:24:52.704431 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.704439 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:52.704446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:52.704524 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:52.748018 1129259 cri.go:89] found id: ""
	I0318 14:24:52.748043 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.748052 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:52.748059 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:52.748128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:52.786901 1129259 cri.go:89] found id: ""
	I0318 14:24:52.786942 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.786956 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:52.786966 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:52.787040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:52.828259 1129259 cri.go:89] found id: ""
	I0318 14:24:52.828288 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.828298 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:52.828304 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:52.828360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:52.867439 1129259 cri.go:89] found id: ""
	I0318 14:24:52.867470 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.867482 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:52.867495 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:52.867513 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:52.920709 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:52.920755 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:52.936596 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:52.936631 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:53.012271 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:53.012300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:53.012315 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.092318 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:53.092358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:55.642662 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:55.656650 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:55.656725 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:55.700050 1129259 cri.go:89] found id: ""
	I0318 14:24:55.700085 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.700099 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:55.700109 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:55.700183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:55.742561 1129259 cri.go:89] found id: ""
	I0318 14:24:55.742599 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.742608 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:55.742614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:55.742668 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:55.780395 1129259 cri.go:89] found id: ""
	I0318 14:24:55.780427 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.780435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:55.780442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:55.780505 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:55.819259 1129259 cri.go:89] found id: ""
	I0318 14:24:55.819291 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.819301 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:55.819310 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:55.819366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:55.859189 1129259 cri.go:89] found id: ""
	I0318 14:24:55.859227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.859240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:55.859249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:55.859322 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:55.900012 1129259 cri.go:89] found id: ""
	I0318 14:24:55.900050 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.900062 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:55.900070 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:55.900146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:55.936548 1129259 cri.go:89] found id: ""
	I0318 14:24:55.936578 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.936587 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:55.936595 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:55.936661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:55.977201 1129259 cri.go:89] found id: ""
	I0318 14:24:55.977241 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.977254 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:55.977266 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:55.977281 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:56.030548 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:56.030603 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:56.047923 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:56.047959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:56.129425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:56.129457 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:56.129474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.199935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:55.699461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.811981 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.814200 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.799464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.800623 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.224109 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:56.224173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.771513 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:58.786323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:58.786416 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:58.832801 1129259 cri.go:89] found id: ""
	I0318 14:24:58.832843 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.832856 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:58.832868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:58.832945 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:58.873757 1129259 cri.go:89] found id: ""
	I0318 14:24:58.873792 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.873802 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:58.873811 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:58.873875 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:58.920727 1129259 cri.go:89] found id: ""
	I0318 14:24:58.920759 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.920769 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:58.920775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:58.920841 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:58.975483 1129259 cri.go:89] found id: ""
	I0318 14:24:58.975524 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.975538 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:58.975549 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:58.975627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:59.027055 1129259 cri.go:89] found id: ""
	I0318 14:24:59.027092 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.027104 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:59.027113 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:59.027195 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:59.073394 1129259 cri.go:89] found id: ""
	I0318 14:24:59.073435 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.073457 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:59.073466 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:59.073536 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:59.114945 1129259 cri.go:89] found id: ""
	I0318 14:24:59.114982 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.114991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:59.114998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:59.115056 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:59.155496 1129259 cri.go:89] found id: ""
	I0318 14:24:59.155533 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.155545 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:59.155558 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:59.155574 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:59.214435 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:59.214476 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:59.230733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:59.230780 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:59.308976 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:59.309007 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:59.309024 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:59.396237 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:59.396287 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.198049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:00.199613 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.312698 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.811687 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.299462 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.300239 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:05.301621 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.941736 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:01.955973 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:01.956058 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:01.995149 1129259 cri.go:89] found id: ""
	I0318 14:25:01.995187 1129259 logs.go:276] 0 containers: []
	W0318 14:25:01.995208 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:01.995217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:01.995287 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:02.036739 1129259 cri.go:89] found id: ""
	I0318 14:25:02.036780 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.036794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:02.036804 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:02.036880 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:02.074909 1129259 cri.go:89] found id: ""
	I0318 14:25:02.074937 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.074947 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:02.074954 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:02.075039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:02.112164 1129259 cri.go:89] found id: ""
	I0318 14:25:02.112203 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.112215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:02.112223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:02.112281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:02.150756 1129259 cri.go:89] found id: ""
	I0318 14:25:02.150795 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.150808 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:02.150816 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:02.150885 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:02.194475 1129259 cri.go:89] found id: ""
	I0318 14:25:02.194511 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.194522 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:02.194531 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:02.194603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:02.237472 1129259 cri.go:89] found id: ""
	I0318 14:25:02.237499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.237508 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:02.237514 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:02.237582 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:02.278094 1129259 cri.go:89] found id: ""
	I0318 14:25:02.278136 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.278157 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:02.278171 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:02.278190 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:02.366946 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:02.367004 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.412234 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:02.412267 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:02.470036 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:02.470109 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:02.487051 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:02.487085 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:02.574515 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.074768 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:05.090386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:05.090466 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:05.131144 1129259 cri.go:89] found id: ""
	I0318 14:25:05.131180 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.131190 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:05.131198 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:05.131254 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:05.171613 1129259 cri.go:89] found id: ""
	I0318 14:25:05.171653 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.171668 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:05.171676 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:05.171748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:05.219256 1129259 cri.go:89] found id: ""
	I0318 14:25:05.219296 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.219310 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:05.219320 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:05.219410 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:05.258580 1129259 cri.go:89] found id: ""
	I0318 14:25:05.258615 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.258625 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:05.258633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:05.258688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:05.297198 1129259 cri.go:89] found id: ""
	I0318 14:25:05.297230 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.297240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:05.297249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:05.297319 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:05.341148 1129259 cri.go:89] found id: ""
	I0318 14:25:05.341184 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.341196 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:05.341205 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:05.341274 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:05.382094 1129259 cri.go:89] found id: ""
	I0318 14:25:05.382121 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.382129 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:05.382135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:05.382199 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:05.422027 1129259 cri.go:89] found id: ""
	I0318 14:25:05.422074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.422083 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:05.422092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:05.422106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:05.474193 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:05.474238 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:05.490325 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:05.490364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:05.566999 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.567029 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:05.567048 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:05.647205 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:05.647247 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.200341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:04.698040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:06.312239 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.811427 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:07.800597 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:10.300964 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.192390 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:08.207905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:08.207992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:08.247221 1129259 cri.go:89] found id: ""
	I0318 14:25:08.247257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.247269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:08.247278 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:08.247347 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:08.289460 1129259 cri.go:89] found id: ""
	I0318 14:25:08.289496 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.289509 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:08.289516 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:08.289601 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:08.330232 1129259 cri.go:89] found id: ""
	I0318 14:25:08.330273 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.330286 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:08.330294 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:08.330366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:08.368035 1129259 cri.go:89] found id: ""
	I0318 14:25:08.368074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.368086 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:08.368094 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:08.368170 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:08.413598 1129259 cri.go:89] found id: ""
	I0318 14:25:08.413631 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.413641 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:08.413647 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:08.413745 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:08.451706 1129259 cri.go:89] found id: ""
	I0318 14:25:08.451742 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.451754 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:08.451762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:08.451856 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:08.491037 1129259 cri.go:89] found id: ""
	I0318 14:25:08.491075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.491088 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:08.491096 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:08.491175 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:08.529376 1129259 cri.go:89] found id: ""
	I0318 14:25:08.529412 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.529423 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:08.529435 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:08.529453 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:08.586539 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:08.586580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:08.602197 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:08.602226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:08.678158 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:08.678186 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:08.678202 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:08.764272 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:08.764326 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:06.700315 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:09.198241 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.198296 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.312458 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:13.312602 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:12.799474 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:14.800216 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.307681 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:11.322482 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:11.322565 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:11.361333 1129259 cri.go:89] found id: ""
	I0318 14:25:11.361366 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.361378 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:11.361386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:11.361457 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:11.399404 1129259 cri.go:89] found id: ""
	I0318 14:25:11.399444 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.399468 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:11.399486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:11.399556 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:11.438279 1129259 cri.go:89] found id: ""
	I0318 14:25:11.438324 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.438338 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:11.438350 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:11.438426 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:11.474991 1129259 cri.go:89] found id: ""
	I0318 14:25:11.475039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.475050 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:11.475058 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:11.475128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:11.511152 1129259 cri.go:89] found id: ""
	I0318 14:25:11.511185 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.511195 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:11.511204 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:11.511271 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:11.549752 1129259 cri.go:89] found id: ""
	I0318 14:25:11.549794 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.549806 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:11.549814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:11.549886 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:11.587089 1129259 cri.go:89] found id: ""
	I0318 14:25:11.587117 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.587135 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:11.587152 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:11.587205 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:11.621515 1129259 cri.go:89] found id: ""
	I0318 14:25:11.621547 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.621559 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:11.621574 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:11.621592 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:11.680905 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:11.680948 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:11.696472 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:11.696508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:11.772013 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:11.772035 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:11.772054 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:11.855131 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:11.855182 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:14.396034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:14.410601 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:14.410677 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:14.449351 1129259 cri.go:89] found id: ""
	I0318 14:25:14.449392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.449404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:14.449413 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:14.449484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:14.488011 1129259 cri.go:89] found id: ""
	I0318 14:25:14.488039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.488049 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:14.488055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:14.488115 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:14.529089 1129259 cri.go:89] found id: ""
	I0318 14:25:14.529128 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.529141 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:14.529148 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:14.529219 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:14.567919 1129259 cri.go:89] found id: ""
	I0318 14:25:14.567952 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.567962 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:14.567975 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:14.568039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:14.604744 1129259 cri.go:89] found id: ""
	I0318 14:25:14.604785 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.604798 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:14.604806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:14.604872 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:14.643367 1129259 cri.go:89] found id: ""
	I0318 14:25:14.643396 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.643405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:14.643411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:14.643473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:14.680584 1129259 cri.go:89] found id: ""
	I0318 14:25:14.680623 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.680639 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:14.680652 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:14.680726 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:14.720040 1129259 cri.go:89] found id: ""
	I0318 14:25:14.720070 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.720080 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:14.720092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:14.720106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:14.773483 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:14.773525 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:14.788628 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:14.788664 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:14.862912 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:14.862941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:14.862959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:14.945001 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:14.945047 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:13.199314 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.199666 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.812120 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.813219 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.814195 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:16.800432 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.299589 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.491984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:17.505305 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:17.505373 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:17.548465 1129259 cri.go:89] found id: ""
	I0318 14:25:17.548493 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.548501 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:17.548508 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:17.548566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:17.590043 1129259 cri.go:89] found id: ""
	I0318 14:25:17.590075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.590084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:17.590090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:17.590147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:17.628014 1129259 cri.go:89] found id: ""
	I0318 14:25:17.628042 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.628051 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:17.628057 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:17.628108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:17.666781 1129259 cri.go:89] found id: ""
	I0318 14:25:17.666814 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.666826 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:17.666835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:17.666892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:17.705989 1129259 cri.go:89] found id: ""
	I0318 14:25:17.706028 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.706048 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:17.706056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:17.706134 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:17.743782 1129259 cri.go:89] found id: ""
	I0318 14:25:17.743815 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.743843 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:17.743853 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:17.743923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:17.787400 1129259 cri.go:89] found id: ""
	I0318 14:25:17.787431 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.787439 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:17.787446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:17.787509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:17.825236 1129259 cri.go:89] found id: ""
	I0318 14:25:17.825270 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.825279 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:17.825291 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:17.825309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:17.877845 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:17.877888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:17.893733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:17.893768 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:17.987782 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:17.987809 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:17.987845 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:18.077756 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:18.077802 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:20.625530 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:20.639692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:20.639783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:20.678892 1129259 cri.go:89] found id: ""
	I0318 14:25:20.678927 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.678939 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:20.678948 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:20.679020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:20.716077 1129259 cri.go:89] found id: ""
	I0318 14:25:20.716109 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.716119 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:20.716124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:20.716179 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:20.756708 1129259 cri.go:89] found id: ""
	I0318 14:25:20.756737 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.756748 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:20.756756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:20.756823 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:20.793692 1129259 cri.go:89] found id: ""
	I0318 14:25:20.793728 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.793740 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:20.793749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:20.793822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:20.834607 1129259 cri.go:89] found id: ""
	I0318 14:25:20.834638 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.834649 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:20.834657 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:20.834728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:20.872583 1129259 cri.go:89] found id: ""
	I0318 14:25:20.872616 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.872625 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:20.872632 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:20.872688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:20.906061 1129259 cri.go:89] found id: ""
	I0318 14:25:20.906099 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.906112 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:20.906120 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:20.906183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:20.942582 1129259 cri.go:89] found id: ""
	I0318 14:25:20.942612 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.942621 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:20.942632 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:20.942646 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:20.958461 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:20.958500 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:21.032841 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:21.032867 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:21.032896 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:21.110717 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:21.110764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:17.698783 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.698980 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.804733 1128788 pod_ready.go:81] duration metric: took 4m0.000568505s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:21.804764 1128788 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:21.804783 1128788 pod_ready.go:38] duration metric: took 4m13.068724908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:21.804834 1128788 kubeadm.go:591] duration metric: took 4m21.284795634s to restartPrimaryControlPlane
	W0318 14:25:21.804919 1128788 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:21.804954 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:21.300889 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:23.800547 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:25.803188 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.160015 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:21.160055 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:23.715103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:23.729231 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:23.729324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:23.779123 1129259 cri.go:89] found id: ""
	I0318 14:25:23.779157 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.779166 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:23.779172 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:23.779247 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:23.820353 1129259 cri.go:89] found id: ""
	I0318 14:25:23.820397 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.820410 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:23.820427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:23.820498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:23.857375 1129259 cri.go:89] found id: ""
	I0318 14:25:23.857405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.857416 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:23.857422 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:23.857490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:23.895114 1129259 cri.go:89] found id: ""
	I0318 14:25:23.895153 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.895165 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:23.895173 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:23.895239 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:23.939728 1129259 cri.go:89] found id: ""
	I0318 14:25:23.939764 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.939776 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:23.939784 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:23.939866 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:23.980585 1129259 cri.go:89] found id: ""
	I0318 14:25:23.980618 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.980631 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:23.980640 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:23.980711 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:24.019562 1129259 cri.go:89] found id: ""
	I0318 14:25:24.019596 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.019604 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:24.019611 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:24.019700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:24.069418 1129259 cri.go:89] found id: ""
	I0318 14:25:24.069455 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.069466 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:24.069478 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:24.069502 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:24.150859 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:24.150893 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:24.150913 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:24.258358 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:24.258408 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:24.304571 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:24.304609 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:24.366826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:24.366882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:21.699436 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:24.199193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:28.300495 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:30.300870 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:26.886056 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:26.904239 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:26.904315 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:26.950812 1129259 cri.go:89] found id: ""
	I0318 14:25:26.950847 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.950859 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:26.950866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:26.950957 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:26.999189 1129259 cri.go:89] found id: ""
	I0318 14:25:26.999224 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.999237 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:26.999246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:26.999312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:27.040452 1129259 cri.go:89] found id: ""
	I0318 14:25:27.040488 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.040499 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:27.040505 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:27.040586 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:27.078751 1129259 cri.go:89] found id: ""
	I0318 14:25:27.078782 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.078792 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:27.078798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:27.078865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:27.116428 1129259 cri.go:89] found id: ""
	I0318 14:25:27.116465 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.116477 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:27.116486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:27.116567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:27.152882 1129259 cri.go:89] found id: ""
	I0318 14:25:27.152922 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.152934 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:27.152942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:27.153023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:27.194470 1129259 cri.go:89] found id: ""
	I0318 14:25:27.194506 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.194518 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:27.194528 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:27.194599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:27.235910 1129259 cri.go:89] found id: ""
	I0318 14:25:27.235939 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.235948 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:27.235959 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:27.235973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:27.302132 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:27.302189 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:27.315806 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:27.315866 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:27.398210 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:27.398240 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:27.398255 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:27.479388 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:27.479432 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:30.026721 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:30.043060 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:30.043133 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:30.083373 1129259 cri.go:89] found id: ""
	I0318 14:25:30.083405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.083415 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:30.083423 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:30.083498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:30.121448 1129259 cri.go:89] found id: ""
	I0318 14:25:30.121485 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.121498 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:30.121506 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:30.121587 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:30.160527 1129259 cri.go:89] found id: ""
	I0318 14:25:30.160557 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.160566 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:30.160574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:30.160636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:30.199812 1129259 cri.go:89] found id: ""
	I0318 14:25:30.199870 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.199884 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:30.199895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:30.199970 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:30.242922 1129259 cri.go:89] found id: ""
	I0318 14:25:30.242959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.242971 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:30.242983 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:30.243053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:30.280918 1129259 cri.go:89] found id: ""
	I0318 14:25:30.280949 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.280962 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:30.280968 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:30.281021 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:30.319928 1129259 cri.go:89] found id: ""
	I0318 14:25:30.319959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.319968 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:30.319974 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:30.320040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:30.363693 1129259 cri.go:89] found id: ""
	I0318 14:25:30.363723 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.363733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:30.363744 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:30.363757 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:30.419559 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:30.419608 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:30.435030 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:30.435078 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:30.514849 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:30.514885 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:30.514903 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:30.601660 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:30.601711 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:26.700384 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:29.203012 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:32.800506 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:35.299464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.150817 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:33.165959 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:33.166045 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:33.205823 1129259 cri.go:89] found id: ""
	I0318 14:25:33.205862 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.205874 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:33.205884 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:33.205951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:33.267817 1129259 cri.go:89] found id: ""
	I0318 14:25:33.267865 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.267878 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:33.267886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:33.267977 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:33.309310 1129259 cri.go:89] found id: ""
	I0318 14:25:33.309338 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.309346 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:33.309353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:33.309417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:33.350169 1129259 cri.go:89] found id: ""
	I0318 14:25:33.350202 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.350215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:33.350223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:33.350289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:33.391919 1129259 cri.go:89] found id: ""
	I0318 14:25:33.391961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.391973 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:33.391981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:33.392049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:33.433001 1129259 cri.go:89] found id: ""
	I0318 14:25:33.433056 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.433069 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:33.433078 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:33.433150 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:33.474482 1129259 cri.go:89] found id: ""
	I0318 14:25:33.474513 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.474533 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:33.474542 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:33.474603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:33.512280 1129259 cri.go:89] found id: ""
	I0318 14:25:33.512314 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.512323 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:33.512333 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:33.512347 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:33.593336 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:33.593378 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:33.636001 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:33.636038 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:33.688881 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:33.688922 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:33.704549 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:33.704580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:33.779659 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:31.698372 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.699450 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.199443 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:37.299695 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:39.800741 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.280240 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:36.295566 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:36.295646 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:36.336195 1129259 cri.go:89] found id: ""
	I0318 14:25:36.336235 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.336248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:36.336257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:36.336334 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:36.378038 1129259 cri.go:89] found id: ""
	I0318 14:25:36.378084 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.378099 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:36.378110 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:36.378191 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:36.425389 1129259 cri.go:89] found id: ""
	I0318 14:25:36.425433 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.425446 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:36.425453 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:36.425512 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:36.464639 1129259 cri.go:89] found id: ""
	I0318 14:25:36.464683 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.464749 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:36.464763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:36.464828 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:36.509515 1129259 cri.go:89] found id: ""
	I0318 14:25:36.509550 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.509563 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:36.509573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:36.509645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:36.554761 1129259 cri.go:89] found id: ""
	I0318 14:25:36.554789 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.554800 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:36.554806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:36.554859 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:36.593817 1129259 cri.go:89] found id: ""
	I0318 14:25:36.593852 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.593861 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:36.593868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:36.593923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:36.634005 1129259 cri.go:89] found id: ""
	I0318 14:25:36.634038 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.634050 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:36.634063 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:36.634081 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:36.687869 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:36.687910 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:36.704507 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:36.704550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:36.785201 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:36.785257 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:36.785275 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:36.866058 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:36.866104 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:39.409796 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:39.426897 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:39.426972 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:39.472221 1129259 cri.go:89] found id: ""
	I0318 14:25:39.472257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.472269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:39.472285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:39.472352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:39.513920 1129259 cri.go:89] found id: ""
	I0318 14:25:39.513961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.513974 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:39.513981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:39.514049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:39.555502 1129259 cri.go:89] found id: ""
	I0318 14:25:39.555538 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.555552 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:39.555565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:39.555627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:39.601583 1129259 cri.go:89] found id: ""
	I0318 14:25:39.601614 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.601622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:39.601628 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:39.601693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:39.648429 1129259 cri.go:89] found id: ""
	I0318 14:25:39.648464 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.648473 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:39.648488 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:39.648564 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:39.698498 1129259 cri.go:89] found id: ""
	I0318 14:25:39.698531 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.698543 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:39.698551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:39.698617 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:39.751350 1129259 cri.go:89] found id: ""
	I0318 14:25:39.751392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.751403 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:39.751411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:39.751482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:39.801912 1129259 cri.go:89] found id: ""
	I0318 14:25:39.801944 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.801956 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:39.801968 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:39.801987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:39.816041 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:39.816076 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:39.899569 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:39.899599 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:39.899621 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:39.980913 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:39.980961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:40.026279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:40.026319 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:38.199879 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:40.698620 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:41.801098 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:44.301379 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:42.585034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:42.601055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:42.601161 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:42.652386 1129259 cri.go:89] found id: ""
	I0318 14:25:42.652422 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.652434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:42.652442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:42.652517 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:42.703304 1129259 cri.go:89] found id: ""
	I0318 14:25:42.703341 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.703353 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:42.703361 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:42.703433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:42.747938 1129259 cri.go:89] found id: ""
	I0318 14:25:42.747972 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.747983 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:42.747992 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:42.748061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:42.793889 1129259 cri.go:89] found id: ""
	I0318 14:25:42.793923 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.793934 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:42.793943 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:42.794012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:42.837991 1129259 cri.go:89] found id: ""
	I0318 14:25:42.838096 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.838124 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:42.838143 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:42.838225 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:42.881892 1129259 cri.go:89] found id: ""
	I0318 14:25:42.882011 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.882036 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:42.882055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:42.882140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:42.921175 1129259 cri.go:89] found id: ""
	I0318 14:25:42.921217 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.921229 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:42.921238 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:42.921310 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:42.966634 1129259 cri.go:89] found id: ""
	I0318 14:25:42.966674 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.966687 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:42.966702 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:42.966720 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:42.982243 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:42.982290 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:43.082154 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:43.082187 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:43.082205 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:43.175904 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:43.175953 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:43.220128 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:43.220224 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:45.785917 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:45.801648 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:45.801736 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:45.842731 1129259 cri.go:89] found id: ""
	I0318 14:25:45.842769 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.842782 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:45.842797 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:45.842858 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:45.887726 1129259 cri.go:89] found id: ""
	I0318 14:25:45.887771 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.887783 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:45.887792 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:45.887900 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:45.929349 1129259 cri.go:89] found id: ""
	I0318 14:25:45.929384 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.929395 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:45.929401 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:45.929473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:45.971540 1129259 cri.go:89] found id: ""
	I0318 14:25:45.971582 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.971595 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:45.971604 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:45.971681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:46.012461 1129259 cri.go:89] found id: ""
	I0318 14:25:46.012499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.012521 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:46.012530 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:46.012607 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:46.057527 1129259 cri.go:89] found id: ""
	I0318 14:25:46.057556 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.057566 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:46.057572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:46.057628 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:46.101115 1129259 cri.go:89] found id: ""
	I0318 14:25:46.101146 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.101156 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:46.101163 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:46.101218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:46.144690 1129259 cri.go:89] found id: ""
	I0318 14:25:46.144722 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.144733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:46.144747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:46.144763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:41.692077 1128964 pod_ready.go:81] duration metric: took 4m0.00104s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:41.692109 1128964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:41.692136 1128964 pod_ready.go:38] duration metric: took 4m13.711186182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:41.692170 1128964 kubeadm.go:591] duration metric: took 4m21.341445822s to restartPrimaryControlPlane
	W0318 14:25:41.692279 1128964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:41.692345 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:46.800687 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:49.300012 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:46.198508 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:46.198552 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:46.213920 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:46.213959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:46.307837 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:46.307870 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:46.307884 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:46.393348 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:46.393393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:48.947758 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:48.963529 1129259 kubeadm.go:591] duration metric: took 4m3.701563316s to restartPrimaryControlPlane
	W0318 14:25:48.963609 1129259 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:48.963632 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:50.782362 1129259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.818697959s)
	I0318 14:25:50.782464 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:50.798866 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:50.810841 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:50.822394 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:50.822417 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:50.822464 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:50.833695 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:50.833763 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:50.845393 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:50.856807 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:50.856882 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:50.868756 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.879442 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:50.879517 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.890725 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:50.901505 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:50.901576 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:25:50.912911 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:50.994085 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:25:50.994244 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:51.166111 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:51.166240 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:51.166390 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:51.374393 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:51.376093 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:51.376230 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:51.376323 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:51.376464 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:51.376538 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:51.376620 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:51.376715 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:51.376821 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:51.376930 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:51.377042 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:51.377141 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:51.377202 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:51.377292 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:51.485218 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:51.556003 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:51.865954 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:52.103582 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:52.120863 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:52.122310 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:52.122433 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:52.280292 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:54.173048 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.368065771s)
	I0318 14:25:54.173145 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:54.192139 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:54.204909 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:54.217096 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:54.217126 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:54.217182 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:54.227905 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:54.228009 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:54.239854 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:54.250668 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:54.250744 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:54.263509 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.274202 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:54.274265 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.285342 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:54.296064 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:54.296157 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:25:54.307985 1128788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:54.371118 1128788 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:25:54.371202 1128788 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:54.551187 1128788 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:54.551377 1128788 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:54.551551 1128788 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:54.780034 1128788 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:54.782426 1128788 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:54.782545 1128788 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:54.782650 1128788 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:54.782735 1128788 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:54.782829 1128788 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:54.782930 1128788 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:54.783213 1128788 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:54.783717 1128788 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:54.784390 1128788 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:54.784849 1128788 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:54.785263 1128788 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:54.785725 1128788 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:54.785826 1128788 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:55.130998 1128788 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:55.387076 1128788 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:55.517240 1128788 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:51.300209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:53.303010 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.800703 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.906565 1128788 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:55.907198 1128788 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:55.909674 1128788 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:52.282451 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:25:52.282559 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:52.289015 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:52.290093 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:52.290987 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:52.293794 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:55.912196 1128788 out.go:204]   - Booting up control plane ...
	I0318 14:25:55.912323 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:55.912407 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:55.912494 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:55.932596 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:55.935171 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:55.935520 1128788 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:56.083395 1128788 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:58.300288 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:00.800291 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:02.086878 1128788 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002842 seconds
	I0318 14:26:02.087052 1128788 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:02.102499 1128788 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:02.637889 1128788 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:02.638152 1128788 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-767719 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:03.157386 1128788 kubeadm.go:309] [bootstrap-token] Using token: do2whq.efhsaljmpmqgv9gj
	I0318 14:26:03.159248 1128788 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:03.159429 1128788 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:03.167328 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:03.180628 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:03.185253 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:03.190014 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:03.202714 1128788 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:03.223282 1128788 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:03.504303 1128788 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:03.614837 1128788 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:03.614872 1128788 kubeadm.go:309] 
	I0318 14:26:03.614978 1128788 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:03.615004 1128788 kubeadm.go:309] 
	I0318 14:26:03.615107 1128788 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:03.615117 1128788 kubeadm.go:309] 
	I0318 14:26:03.615149 1128788 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:03.615219 1128788 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:03.615285 1128788 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:03.615293 1128788 kubeadm.go:309] 
	I0318 14:26:03.615354 1128788 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:03.615365 1128788 kubeadm.go:309] 
	I0318 14:26:03.615421 1128788 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:03.615430 1128788 kubeadm.go:309] 
	I0318 14:26:03.615486 1128788 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:03.615578 1128788 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:03.615669 1128788 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:03.615679 1128788 kubeadm.go:309] 
	I0318 14:26:03.615778 1128788 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:03.615887 1128788 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:03.615897 1128788 kubeadm.go:309] 
	I0318 14:26:03.615998 1128788 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616120 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:03.616149 1128788 kubeadm.go:309] 	--control-plane 
	I0318 14:26:03.616159 1128788 kubeadm.go:309] 
	I0318 14:26:03.616266 1128788 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:03.616276 1128788 kubeadm.go:309] 
	I0318 14:26:03.616371 1128788 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616500 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:03.617330 1128788 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:03.617374 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:26:03.617384 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:03.619394 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:03.620836 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:03.665582 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:26:03.812834 1128788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:03.812897 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:03.812943 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-767719 minikube.k8s.io/updated_at=2024_03_18T14_26_03_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=embed-certs-767719 minikube.k8s.io/primary=true
	I0318 14:26:03.899419 1128788 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:04.104407 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:04.604499 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.104532 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.605047 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:02.800707 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:04.802167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:06.105187 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:06.604462 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.104411 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.605096 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.104448 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.604430 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.104707 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.605130 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.104955 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.605165 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.300575 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:09.798776 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:11.104436 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.605273 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.104851 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.604819 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.104669 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.605089 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.105486 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.604568 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.104455 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.604422 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.799935 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:13.800907 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:15.801754 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:16.105107 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:16.205506 1128788 kubeadm.go:1107] duration metric: took 12.39266353s to wait for elevateKubeSystemPrivileges
	W0318 14:26:16.205558 1128788 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:16.205570 1128788 kubeadm.go:393] duration metric: took 5m15.738081871s to StartCluster
	I0318 14:26:16.205599 1128788 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.205720 1128788 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:16.208645 1128788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.209157 1128788 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:16.210915 1128788 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:16.209206 1128788 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:16.209401 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:16.212258 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:16.212275 1128788 addons.go:69] Setting default-storageclass=true in profile "embed-certs-767719"
	I0318 14:26:16.212351 1128788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-767719"
	I0318 14:26:16.212260 1128788 addons.go:69] Setting metrics-server=true in profile "embed-certs-767719"
	I0318 14:26:16.212415 1128788 addons.go:234] Setting addon metrics-server=true in "embed-certs-767719"
	W0318 14:26:16.212431 1128788 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:16.212469 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212260 1128788 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-767719"
	I0318 14:26:16.212512 1128788 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-767719"
	W0318 14:26:16.212527 1128788 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:16.212560 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.212983 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213003 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213028 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.213040 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.231532 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0318 14:26:16.231543 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0318 14:26:16.232128 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0318 14:26:16.232280 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232284 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232882 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.232907 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.232922 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.233258 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233284 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.233360 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.233479 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233501 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.235956 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236151 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236372 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236411 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.236545 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236568 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.240163 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.244336 1128788 addons.go:234] Setting addon default-storageclass=true in "embed-certs-767719"
	W0318 14:26:16.244370 1128788 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:16.244407 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.244845 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.244894 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.257940 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0318 14:26:16.258701 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.259359 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.259386 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.259769 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.260030 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.262272 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.262286 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0318 14:26:16.264459 1128788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:16.262834 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.265430 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I0318 14:26:16.266198 1128788 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.266220 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:16.266240 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.266482 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.266663 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.266676 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267253 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.267277 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267753 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.268456 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.268605 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.269068 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.269098 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.269804 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270398 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.270420 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270711 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.270989 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.271183 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.271362 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.271984 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.273854 1128788 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:14.305258 1128964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.612890386s)
	I0318 14:26:14.305324 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:14.325572 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:26:14.337875 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:26:14.350490 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:26:14.350530 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:26:14.350592 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:26:14.361521 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:26:14.361612 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:26:14.372767 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:26:14.383545 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:26:14.383614 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:26:14.394057 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.404187 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:26:14.404261 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.415029 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:26:14.425738 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:26:14.425820 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:26:14.436847 1128964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:26:14.674909 1128964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:16.275278 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:16.275298 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:16.275323 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.278500 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.278909 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.278939 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.279230 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.279437 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.279612 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.279748 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.286716 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0318 14:26:16.287176 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.287651 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.287678 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.288057 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.288248 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.290084 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.290359 1128788 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.290381 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:16.290404 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.293253 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293662 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.293688 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293886 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.294078 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.294241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.294398 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.460832 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:16.537089 1128788 node_ready.go:35] waiting up to 6m0s for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550362 1128788 node_ready.go:49] node "embed-certs-767719" has status "Ready":"True"
	I0318 14:26:16.550391 1128788 node_ready.go:38] duration metric: took 13.195546ms for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550405 1128788 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:16.557745 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:16.638531 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:16.638565 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:16.664638 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.762661 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:16.762713 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:16.792712 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.859169 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:16.859200 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:16.954827 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:18.103559 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.103592 1128788 pod_ready.go:81] duration metric: took 1.545818643s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.103606 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.256039 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.591350359s)
	I0318 14:26:18.256112 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256129 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256483 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256513 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256530 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256528 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.256541 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256918 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256936 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256950 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.264761 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.264788 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.265133 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.265164 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.265193 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.652953 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.653088 1128788 pod_ready.go:81] duration metric: took 549.466665ms for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.653124 1128788 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674506 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.674553 1128788 pod_ready.go:81] duration metric: took 21.386005ms for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674568 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.680422 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.887663901s)
	I0318 14:26:18.680486 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680498 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.680875 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.680887 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.680903 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.680921 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680928 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.681198 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.681199 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.681277 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.711919 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.711954 1128788 pod_ready.go:81] duration metric: took 37.376915ms for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.711968 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730096 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.730129 1128788 pod_ready.go:81] duration metric: took 18.151839ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730145 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.756000 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.801120989s)
	I0318 14:26:18.756076 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756091 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756416 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756435 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756445 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756452 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756849 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756883 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756895 1128788 addons.go:470] Verifying addon metrics-server=true in "embed-certs-767719"
	I0318 14:26:18.756917 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.759019 1128788 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 14:26:18.760442 1128788 addons.go:505] duration metric: took 2.551236037s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 14:26:18.942164 1128788 pod_ready.go:92] pod "kube-proxy-f4547" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.942196 1128788 pod_ready.go:81] duration metric: took 212.040337ms for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.942205 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341772 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:19.341808 1128788 pod_ready.go:81] duration metric: took 399.594033ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341820 1128788 pod_ready.go:38] duration metric: took 2.791403027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:19.341841 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:19.341921 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:19.362110 1128788 api_server.go:72] duration metric: took 3.152894755s to wait for apiserver process to appear ...
	I0318 14:26:19.362150 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:19.362209 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:26:19.368138 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:26:19.369583 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:19.369608 1128788 api_server.go:131] duration metric: took 7.450993ms to wait for apiserver health ...
	I0318 14:26:19.369617 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:19.545388 1128788 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:19.545423 1128788 system_pods.go:61] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.545428 1128788 system_pods.go:61] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.545431 1128788 system_pods.go:61] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.545434 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.545438 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.545441 1128788 system_pods.go:61] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.545443 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.545449 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.545455 1128788 system_pods.go:61] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.545464 1128788 system_pods.go:74] duration metric: took 175.840386ms to wait for pod list to return data ...
	I0318 14:26:19.545473 1128788 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:19.741364 1128788 default_sa.go:45] found service account: "default"
	I0318 14:26:19.741405 1128788 default_sa.go:55] duration metric: took 195.920075ms for default service account to be created ...
	I0318 14:26:19.741424 1128788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:19.945000 1128788 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:19.945039 1128788 system_pods.go:89] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.945047 1128788 system_pods.go:89] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.945053 1128788 system_pods.go:89] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.945060 1128788 system_pods.go:89] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.945066 1128788 system_pods.go:89] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.945070 1128788 system_pods.go:89] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.945076 1128788 system_pods.go:89] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.945087 1128788 system_pods.go:89] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.945097 1128788 system_pods.go:89] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.945110 1128788 system_pods.go:126] duration metric: took 203.67742ms to wait for k8s-apps to be running ...
	I0318 14:26:19.945122 1128788 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:19.945188 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:19.987286 1128788 system_svc.go:56] duration metric: took 42.149434ms WaitForService to wait for kubelet
	I0318 14:26:19.987328 1128788 kubeadm.go:576] duration metric: took 3.778120092s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:19.987361 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:20.141763 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:20.141803 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:20.141822 1128788 node_conditions.go:105] duration metric: took 154.45408ms to run NodePressure ...
	I0318 14:26:20.141840 1128788 start.go:240] waiting for startup goroutines ...
	I0318 14:26:20.141851 1128788 start.go:245] waiting for cluster config update ...
	I0318 14:26:20.141867 1128788 start.go:254] writing updated cluster config ...
	I0318 14:26:20.142268 1128788 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:20.206832 1128788 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:20.209057 1128788 out.go:177] * Done! kubectl is now configured to use "embed-certs-767719" cluster and "default" namespace by default
	I0318 14:26:18.302228 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:20.799704 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.444912 1128964 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:26:23.444993 1128964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:26:23.445098 1128964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:26:23.445212 1128964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:26:23.445359 1128964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:26:23.445461 1128964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:26:23.446790 1128964 out.go:204]   - Generating certificates and keys ...
	I0318 14:26:23.446904 1128964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:26:23.446986 1128964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:26:23.447102 1128964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:26:23.447194 1128964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:26:23.447309 1128964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:26:23.447376 1128964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:26:23.447453 1128964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:26:23.447529 1128964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:26:23.447607 1128964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:26:23.447693 1128964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:26:23.447741 1128964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:26:23.447856 1128964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:26:23.447937 1128964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:26:23.448019 1128964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:26:23.448121 1128964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:26:23.448194 1128964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:26:23.448311 1128964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:26:23.448422 1128964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:26:23.450038 1128964 out.go:204]   - Booting up control plane ...
	I0318 14:26:23.450174 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:26:23.450282 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:26:23.450371 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:26:23.450509 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:26:23.450633 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:26:23.450671 1128964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:26:23.450818 1128964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:26:23.450887 1128964 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.005932 seconds
	I0318 14:26:23.450974 1128964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:23.451093 1128964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:23.451143 1128964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:23.451340 1128964 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-075922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:23.451414 1128964 kubeadm.go:309] [bootstrap-token] Using token: k51w96.h8xduusjdfbez3gf
	I0318 14:26:23.452848 1128964 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:23.452964 1128964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:23.453073 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:23.453269 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:23.453499 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:23.453664 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:23.453785 1128964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:23.453940 1128964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:23.454005 1128964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:23.454074 1128964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:23.454084 1128964 kubeadm.go:309] 
	I0318 14:26:23.454172 1128964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:23.454186 1128964 kubeadm.go:309] 
	I0318 14:26:23.454288 1128964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:23.454298 1128964 kubeadm.go:309] 
	I0318 14:26:23.454335 1128964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:23.454412 1128964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:23.454475 1128964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:23.454484 1128964 kubeadm.go:309] 
	I0318 14:26:23.454528 1128964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:23.454538 1128964 kubeadm.go:309] 
	I0318 14:26:23.454592 1128964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:23.454599 1128964 kubeadm.go:309] 
	I0318 14:26:23.454681 1128964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:23.454804 1128964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:23.454907 1128964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:23.454919 1128964 kubeadm.go:309] 
	I0318 14:26:23.455027 1128964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:23.455146 1128964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:23.455157 1128964 kubeadm.go:309] 
	I0318 14:26:23.455264 1128964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455401 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:23.455433 1128964 kubeadm.go:309] 	--control-plane 
	I0318 14:26:23.455441 1128964 kubeadm.go:309] 
	I0318 14:26:23.455551 1128964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:23.455560 1128964 kubeadm.go:309] 
	I0318 14:26:23.455666 1128964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455814 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:23.455838 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:26:23.455849 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:23.457678 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:22.801209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:25.305096 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.459285 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:23.475803 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:26:23.515652 1128964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-075922 minikube.k8s.io/updated_at=2024_03_18T14_26_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=default-k8s-diff-port-075922 minikube.k8s.io/primary=true
	I0318 14:26:23.796828 1128964 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:23.796947 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.296970 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.797728 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.297564 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.797144 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:26.297056 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.800960 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:29.802967 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:26.798004 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.297935 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.797550 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.297031 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.797624 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.297549 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.797256 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.297964 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.797927 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:31.297742 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.300787 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:34.800941 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:31.797040 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.297155 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.797371 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.297809 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.797723 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.297045 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.797008 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.297030 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.797767 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.895914 1128964 kubeadm.go:1107] duration metric: took 12.380212538s to wait for elevateKubeSystemPrivileges
	W0318 14:26:35.895975 1128964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:35.895987 1128964 kubeadm.go:393] duration metric: took 5m15.606276512s to StartCluster
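The repeated "kubectl get sa default" lines above are the elevateKubeSystemPrivileges wait: minikube re-runs the command roughly every 500ms until the "default" service account exists, then records the total duration. A minimal sketch of that kind of poll loop, assuming sudo and the kubectl binary path from the log are available on the machine (this is illustrative, not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
    // timeout expires, mirroring the ~0.5s retry spacing visible in the log.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // the default service account exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for the default service account")
    }

    func main() {
        // Paths taken from the log; adjust for your environment.
        err := waitForDefaultSA(
            "/var/lib/minikube/binaries/v1.28.4/kubectl",
            "/var/lib/minikube/kubeconfig",
            30*time.Second)
        fmt.Println(err)
    }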
	I0318 14:26:35.896013 1128964 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.896123 1128964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:35.898023 1128964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.898324 1128964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:35.900235 1128964 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:35.898415 1128964 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:35.898550 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:35.901588 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:35.901599 1128964 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901617 1128964 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901640 1128964 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901650 1128964 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:35.901665 1128964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-075922"
	I0318 14:26:35.901588 1128964 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901698 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.901723 1128964 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901735 1128964 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:35.901764 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.902055 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902088 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902097 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902126 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902130 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902169 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.919538 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0318 14:26:35.920140 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.920836 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.920864 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.921282 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.921940 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.921983 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.923313 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
	I0318 14:26:35.923321 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0318 14:26:35.923742 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.923792 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.924263 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924280 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924381 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924395 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924710 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924733 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924893 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.925215 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.925235 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.928021 1128964 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.928047 1128964 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:35.928081 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.928422 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.928449 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.941908 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0318 14:26:35.942465 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.943114 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.943146 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.943757 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.943991 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.944493 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0318 14:26:35.944874 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.945387 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.945404 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.945865 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.945988 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.948302 1128964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:35.946821 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.947744 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0318 14:26:35.950087 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:35.950110 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:35.950135 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.950181 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.950664 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.951258 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.951295 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.951755 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.952146 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.953842 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954331 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.954353 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954360 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.954563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.956253 1128964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:35.954739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:32.294235 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:26:32.295514 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:32.295750 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:35.956487 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.957743 1128964 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:35.957764 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:35.957783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.957864 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.960451 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.960896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.960929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.961107 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.961281 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.961435 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.961565 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.968795 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0318 14:26:35.969191 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.969631 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.969646 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.969955 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.970117 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.971799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.972169 1128964 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:35.972188 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:35.972206 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.974906 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975268 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.975301 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975551 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.975767 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.975958 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.976137 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:36.122420 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:36.139655 1128964 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160857 1128964 node_ready.go:49] node "default-k8s-diff-port-075922" has status "Ready":"True"
	I0318 14:26:36.160883 1128964 node_ready.go:38] duration metric: took 21.193343ms for node "default-k8s-diff-port-075922" to be "Ready" ...
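The node_ready lines above wait for the node's "Ready" condition to report "True". A small sketch of an equivalent check that shells out to kubectl with a jsonpath filter; the node name is taken from the log, the helper name is hypothetical, and kubectl is assumed to be on PATH with the right kubeconfig:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // nodeReady returns true when the node's Ready condition status is "True".
    func nodeReady(node string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "node", node,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        ready, err := nodeReady("default-k8s-diff-port-075922")
        fmt.Println(ready, err)
    }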
	I0318 14:26:36.160893 1128964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:36.176832 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:36.240357 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:36.240385 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:36.261620 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:36.279644 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:36.294510 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:36.294546 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:36.374231 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:36.376166 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:36.419045 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:38.032072 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.752379015s)
	I0318 14:26:38.032148 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032161 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032374 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.770714521s)
	I0318 14:26:38.032416 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032427 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032623 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032652 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032660 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032683 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032796 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032814 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032817 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032835 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032848 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.033046 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033107 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033173 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.033149 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033259 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033284 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.112866 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.112896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.113337 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.113362 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.113384 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176199 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.757085355s)
	I0318 14:26:38.176281 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176302 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176669 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176683 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176697 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176707 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176716 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176955 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176969 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176980 1128964 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-075922"
	I0318 14:26:38.178714 1128964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:26:37.300219 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:39.293136 1128583 pod_ready.go:81] duration metric: took 4m0.000606722s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
	E0318 14:26:39.293173 1128583 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:26:39.293203 1128583 pod_ready.go:38] duration metric: took 4m14.549283732s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:39.293239 1128583 kubeadm.go:591] duration metric: took 4m22.862167815s to restartPrimaryControlPlane
	W0318 14:26:39.293320 1128583 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:26:39.293362 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:26:37.296327 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:37.296642 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:38.180451 1128964 addons.go:505] duration metric: took 2.282033093s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:26:38.194239 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:40.186091 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.186125 1128964 pod_ready.go:81] duration metric: took 4.009253844s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.186139 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193026 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.193059 1128964 pod_ready.go:81] duration metric: took 6.912513ms for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193069 1128964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199244 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.199272 1128964 pod_ready.go:81] duration metric: took 6.195834ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199283 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.204991 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.205019 1128964 pod_ready.go:81] duration metric: took 5.728459ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.205034 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214706 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.214730 1128964 pod_ready.go:81] duration metric: took 9.687528ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214739 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.581970 1128964 pod_ready.go:92] pod "kube-proxy-bzwvf" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.582045 1128964 pod_ready.go:81] duration metric: took 367.297496ms for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.582059 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981562 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.981592 1128964 pod_ready.go:81] duration metric: took 399.525488ms for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981601 1128964 pod_ready.go:38] duration metric: took 4.820697544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:40.981618 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:40.981676 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:40.998626 1128964 api_server.go:72] duration metric: took 5.100242538s to wait for apiserver process to appear ...
	I0318 14:26:40.998672 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:40.998703 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:26:41.010986 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:26:41.012714 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:41.012742 1128964 api_server.go:131] duration metric: took 14.061953ms to wait for apiserver health ...
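The healthz lines above probe https://192.168.83.39:8444/healthz and treat an HTTP 200 response with body "ok" as a healthy apiserver. A minimal, self-contained sketch of that probe; TLS verification is skipped here purely to keep the example short (an assumption), whereas a real client would trust the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy reports whether the /healthz endpoint returned 200 "ok".
    func apiserverHealthy(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption for brevity: do not verify the apiserver certificate.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        ok, err := apiserverHealthy("https://192.168.83.39:8444/healthz")
        fmt.Println(ok, err)
    }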
	I0318 14:26:41.012750 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:41.186873 1128964 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:41.186910 1128964 system_pods.go:61] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.186917 1128964 system_pods.go:61] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.186922 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.186935 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.186943 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.186948 1128964 system_pods.go:61] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.186953 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.187013 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.187029 1128964 system_pods.go:61] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.187041 1128964 system_pods.go:74] duration metric: took 174.283401ms to wait for pod list to return data ...
	I0318 14:26:41.187054 1128964 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:41.381195 1128964 default_sa.go:45] found service account: "default"
	I0318 14:26:41.381238 1128964 default_sa.go:55] duration metric: took 194.17219ms for default service account to be created ...
	I0318 14:26:41.381252 1128964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:41.584896 1128964 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:41.584934 1128964 system_pods.go:89] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.584940 1128964 system_pods.go:89] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.584945 1128964 system_pods.go:89] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.584952 1128964 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.584957 1128964 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.584961 1128964 system_pods.go:89] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.584965 1128964 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.584974 1128964 system_pods.go:89] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.584980 1128964 system_pods.go:89] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.584996 1128964 system_pods.go:126] duration metric: took 203.730421ms to wait for k8s-apps to be running ...
	I0318 14:26:41.585011 1128964 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:41.585065 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:41.602211 1128964 system_svc.go:56] duration metric: took 17.185915ms WaitForService to wait for kubelet
	I0318 14:26:41.602253 1128964 kubeadm.go:576] duration metric: took 5.703881545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:41.602283 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:41.781292 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:41.781321 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:41.781333 1128964 node_conditions.go:105] duration metric: took 179.044515ms to run NodePressure ...
	I0318 14:26:41.781345 1128964 start.go:240] waiting for startup goroutines ...
	I0318 14:26:41.781352 1128964 start.go:245] waiting for cluster config update ...
	I0318 14:26:41.781363 1128964 start.go:254] writing updated cluster config ...
	I0318 14:26:41.781670 1128964 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:41.845950 1128964 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:41.848522 1128964 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-075922" cluster and "default" namespace by default
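The final start.go line above reports "kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)"; kubectl is supported within one minor version of the control plane, so a skew of 1 only earns an informational note. A small sketch of how such a skew could be computed from two "major.minor.patch" strings (the helper is hypothetical, not minikube's code):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor versions of
    // two semver-style strings such as "1.29.3" and "1.28.4".
    func minorSkew(a, b string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("bad version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        ma, err := minor(a)
        if err != nil {
            return 0, err
        }
        mb, err := minor(b)
        if err != nil {
            return 0, err
        }
        if ma > mb {
            return ma - mb, nil
        }
        return mb - ma, nil
    }

    func main() {
        skew, _ := minorSkew("1.29.3", "1.28.4")
        fmt.Println("minor skew:", skew) // prints 1
    }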
	I0318 14:26:47.296738 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:47.296974 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:07.297620 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:07.297848 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:11.668940 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.375539998s)
	I0318 14:27:11.669036 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:11.687767 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:27:11.699135 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:11.710896 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:11.710924 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:11.710971 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:11.721562 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:11.721638 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:11.733335 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:11.744643 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:11.744724 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:11.755801 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.766424 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:11.766515 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.777734 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:11.788887 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:11.788972 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
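The grep/rm sequence above is the stale-config cleanup: each kubeadm kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint, and is otherwise removed so the following `kubeadm init` regenerates it. A simplified sketch of that loop, assuming local file access (the real flow runs grep and rm over SSH inside the guest VM):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleConfigs removes any config file that does not mention the
    // expected control-plane endpoint; missing files are simply skipped.
    func cleanStaleConfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // file absent: nothing to clean
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Println("removing stale config:", f)
                os.Remove(f)
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }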
	I0318 14:27:11.800792 1128583 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:11.858933 1128583 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 14:27:11.859030 1128583 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:27:12.029485 1128583 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:27:12.029703 1128583 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:27:12.029833 1128583 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:27:12.279174 1128583 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:27:12.281285 1128583 out.go:204]   - Generating certificates and keys ...
	I0318 14:27:12.281400 1128583 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:27:12.281507 1128583 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:27:12.281633 1128583 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:27:12.281726 1128583 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:27:12.281844 1128583 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:27:12.281938 1128583 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:27:12.282031 1128583 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:27:12.282121 1128583 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:27:12.282218 1128583 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:27:12.282325 1128583 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:27:12.282392 1128583 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:27:12.282470 1128583 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:27:12.605106 1128583 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:27:12.950706 1128583 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 14:27:13.067948 1128583 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:27:13.340677 1128583 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:27:13.393147 1128583 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:27:13.393891 1128583 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:27:13.396474 1128583 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:27:13.398563 1128583 out.go:204]   - Booting up control plane ...
	I0318 14:27:13.398698 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:27:13.398814 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:27:13.398900 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:27:13.422155 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:27:13.423529 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:27:13.423626 1128583 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:27:13.568295 1128583 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:27:19.571958 1128583 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003509 seconds
	I0318 14:27:19.587644 1128583 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:27:19.607417 1128583 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:27:20.153253 1128583 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:27:20.153526 1128583 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-188109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:27:20.671613 1128583 kubeadm.go:309] [bootstrap-token] Using token: oq5d1l.24j9td8ex727h998
	I0318 14:27:20.673250 1128583 out.go:204]   - Configuring RBAC rules ...
	I0318 14:27:20.673402 1128583 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:27:20.680765 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:27:20.693884 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:27:20.698696 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:27:20.702572 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:27:20.710027 1128583 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:27:20.725068 1128583 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:27:20.981178 1128583 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:27:21.104335 1128583 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:27:21.107428 1128583 kubeadm.go:309] 
	I0318 14:27:21.107550 1128583 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:27:21.107596 1128583 kubeadm.go:309] 
	I0318 14:27:21.107725 1128583 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:27:21.107750 1128583 kubeadm.go:309] 
	I0318 14:27:21.107796 1128583 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:27:21.107894 1128583 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:27:21.107995 1128583 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:27:21.108030 1128583 kubeadm.go:309] 
	I0318 14:27:21.108127 1128583 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:27:21.108145 1128583 kubeadm.go:309] 
	I0318 14:27:21.108228 1128583 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:27:21.108242 1128583 kubeadm.go:309] 
	I0318 14:27:21.108318 1128583 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:27:21.108400 1128583 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:27:21.108487 1128583 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:27:21.108503 1128583 kubeadm.go:309] 
	I0318 14:27:21.108628 1128583 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:27:21.108730 1128583 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:27:21.108741 1128583 kubeadm.go:309] 
	I0318 14:27:21.108839 1128583 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.108968 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:27:21.109031 1128583 kubeadm.go:309] 	--control-plane 
	I0318 14:27:21.109054 1128583 kubeadm.go:309] 
	I0318 14:27:21.109176 1128583 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:27:21.109195 1128583 kubeadm.go:309] 
	I0318 14:27:21.109298 1128583 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.109455 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:27:21.114992 1128583 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:21.115128 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:27:21.115151 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:27:21.116940 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:27:21.118320 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:27:21.167945 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:27:21.256429 1128583 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-188109 minikube.k8s.io/updated_at=2024_03_18T14_27_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=no-preload-188109 minikube.k8s.io/primary=true
	I0318 14:27:21.315419 1128583 ops.go:34] apiserver oom_adj: -16
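The ops.go line above reads the kube-apiserver's oom_adj (the value -16 makes the kernel's OOM killer much less likely to pick the apiserver), using the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command shown earlier. A rough sketch of the same read; pgrep is assumed to be on PATH and the helper name is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj finds the newest kube-apiserver PID and reads its oom_adj.
    func apiserverOOMAdj() (string, error) {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            return "", fmt.Errorf("kube-apiserver not running: %w", err)
        }
        pid := strings.TrimSpace(string(out))
        val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(val)), nil
    }

    func main() {
        v, err := apiserverOOMAdj()
        fmt.Println(v, err)
    }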
	I0318 14:27:21.530472 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.030814 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.531214 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.030869 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.530677 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.031137 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.531400 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.031455 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.530648 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.031501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.531399 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.031109 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.531261 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.030757 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.531295 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.030505 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.531501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.030996 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.530490 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.030520 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.531340 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.031217 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.531425 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.031231 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.531300 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.678904 1128583 kubeadm.go:1107] duration metric: took 12.422463336s to wait for elevateKubeSystemPrivileges
	W0318 14:27:33.678959 1128583 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:27:33.678972 1128583 kubeadm.go:393] duration metric: took 5m17.305262011s to StartCluster
	I0318 14:27:33.678999 1128583 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.679119 1128583 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:27:33.681595 1128583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.681893 1128583 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:27:33.683724 1128583 out.go:177] * Verifying Kubernetes components...
	I0318 14:27:33.682059 1128583 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:27:33.682122 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:27:33.685123 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:27:33.685131 1128583 addons.go:69] Setting default-storageclass=true in profile "no-preload-188109"
	I0318 14:27:33.685135 1128583 addons.go:69] Setting storage-provisioner=true in profile "no-preload-188109"
	I0318 14:27:33.685139 1128583 addons.go:69] Setting metrics-server=true in profile "no-preload-188109"
	I0318 14:27:33.685165 1128583 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-188109"
	I0318 14:27:33.685173 1128583 addons.go:234] Setting addon metrics-server=true in "no-preload-188109"
	I0318 14:27:33.685175 1128583 addons.go:234] Setting addon storage-provisioner=true in "no-preload-188109"
	W0318 14:27:33.685182 1128583 addons.go:243] addon metrics-server should already be in state true
	W0318 14:27:33.685185 1128583 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:27:33.685231 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685238 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685573 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685575 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685613 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685617 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685629 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685637 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.703022 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0318 14:27:33.703262 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0318 14:27:33.703844 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704181 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704628 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704649 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.704715 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704736 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.705213 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705374 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705809 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705863 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.705911 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705987 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.706076 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0318 14:27:33.706558 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.707198 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.707222 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.707699 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.708354 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.712289 1128583 addons.go:234] Setting addon default-storageclass=true in "no-preload-188109"
	W0318 14:27:33.712323 1128583 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:27:33.712364 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.712795 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.712833 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.724381 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0318 14:27:33.724980 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.725587 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.725614 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.726054 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.726363 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.727777 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0318 14:27:33.728182 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.728497 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.730538 1128583 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:27:33.729152 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.730851 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0318 14:27:33.732037 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:27:33.732055 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:27:33.732076 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.732113 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.732489 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.732593 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.732881 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.732979 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.732991 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.733604 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.734297 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.734329 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.735399 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.737266 1128583 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:27:33.735988 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.736830 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.739081 1128583 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:33.739098 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:27:33.737327 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.739122 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.739142 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.740009 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.740263 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.740482 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.742702 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743181 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.743211 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743473 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.743706 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.743902 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.744097 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.752903 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0318 14:27:33.756275 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.756901 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.756932 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.757363 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.757603 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.759471 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.759732 1128583 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:33.759751 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:27:33.759772 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.762687 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763139 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.763162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763414 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.763599 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.763765 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.763919 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.942490 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:27:33.975796 1128583 node_ready.go:35] waiting up to 6m0s for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008100 1128583 node_ready.go:49] node "no-preload-188109" has status "Ready":"True"
	I0318 14:27:34.008135 1128583 node_ready.go:38] duration metric: took 32.281068ms for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008149 1128583 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:34.039370 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:34.067765 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:27:34.067798 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:27:34.088294 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:34.091931 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:34.121689 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:27:34.121722 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:27:34.183609 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:34.183638 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:27:34.264906 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:35.590900 1128583 pod_ready.go:92] pod "coredns-76f75df574-jk9v5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.590928 1128583 pod_ready.go:81] duration metric: took 1.551526097s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.590938 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605647 1128583 pod_ready.go:92] pod "coredns-76f75df574-xczpc" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.605675 1128583 pod_ready.go:81] duration metric: took 14.730232ms for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605685 1128583 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.613213 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.521243904s)
	I0318 14:27:35.613276 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613289 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613282 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.524948587s)
	I0318 14:27:35.613324 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.613811 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613813 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.613824 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613831 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614119 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614166 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614183 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.614191 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614192 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.614234 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614273 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614502 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614517 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.636576 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.636610 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.636920 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.636946 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.656945 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.656972 1128583 pod_ready.go:81] duration metric: took 51.280554ms for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.656983 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683260 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.683291 1128583 pod_ready.go:81] duration metric: took 26.301625ms for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683301 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.691855 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42688194s)
	I0318 14:27:35.691918 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.691934 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692300 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692325 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692336 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.692344 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692661 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.692701 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692709 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692721 1128583 addons.go:470] Verifying addon metrics-server=true in "no-preload-188109"
	I0318 14:27:35.694758 1128583 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:27:35.696004 1128583 addons.go:505] duration metric: took 2.013954954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:27:35.709010 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.709035 1128583 pod_ready.go:81] duration metric: took 25.726967ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.709045 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982032 1128583 pod_ready.go:92] pod "kube-proxy-qpxx5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.982080 1128583 pod_ready.go:81] duration metric: took 273.026354ms for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982094 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380184 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:36.380228 1128583 pod_ready.go:81] duration metric: took 398.123566ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380241 1128583 pod_ready.go:38] duration metric: took 2.372078145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:36.380264 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:27:36.380334 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:27:36.401316 1128583 api_server.go:72] duration metric: took 2.719374991s to wait for apiserver process to appear ...
	I0318 14:27:36.401358 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:27:36.401389 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:27:36.407212 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:27:36.408930 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:27:36.408966 1128583 api_server.go:131] duration metric: took 7.597771ms to wait for apiserver health ...
	I0318 14:27:36.408989 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:27:36.583053 1128583 system_pods.go:59] 9 kube-system pods found
	I0318 14:27:36.583099 1128583 system_pods.go:61] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.583107 1128583 system_pods.go:61] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.583112 1128583 system_pods.go:61] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.583116 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.583120 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.583123 1128583 system_pods.go:61] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.583127 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.583134 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.583138 1128583 system_pods.go:61] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.583147 1128583 system_pods.go:74] duration metric: took 174.139423ms to wait for pod list to return data ...
	I0318 14:27:36.583156 1128583 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:27:36.779733 1128583 default_sa.go:45] found service account: "default"
	I0318 14:27:36.779771 1128583 default_sa.go:55] duration metric: took 196.607194ms for default service account to be created ...
	I0318 14:27:36.779783 1128583 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:27:36.982750 1128583 system_pods.go:86] 9 kube-system pods found
	I0318 14:27:36.982783 1128583 system_pods.go:89] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.982789 1128583 system_pods.go:89] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.982793 1128583 system_pods.go:89] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.982798 1128583 system_pods.go:89] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.982804 1128583 system_pods.go:89] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.982808 1128583 system_pods.go:89] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.982812 1128583 system_pods.go:89] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.982819 1128583 system_pods.go:89] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.982823 1128583 system_pods.go:89] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.982832 1128583 system_pods.go:126] duration metric: took 203.042771ms to wait for k8s-apps to be running ...
	I0318 14:27:36.982839 1128583 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:27:36.982902 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:37.000948 1128583 system_svc.go:56] duration metric: took 18.09435ms WaitForService to wait for kubelet
	I0318 14:27:37.000980 1128583 kubeadm.go:576] duration metric: took 3.319049387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:27:37.001005 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:27:37.180608 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:27:37.180639 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:27:37.180652 1128583 node_conditions.go:105] duration metric: took 179.641912ms to run NodePressure ...
	I0318 14:27:37.180665 1128583 start.go:240] waiting for startup goroutines ...
	I0318 14:27:37.180672 1128583 start.go:245] waiting for cluster config update ...
	I0318 14:27:37.180686 1128583 start.go:254] writing updated cluster config ...
	I0318 14:27:37.181004 1128583 ssh_runner.go:195] Run: rm -f paused
	I0318 14:27:37.236286 1128583 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 14:27:37.238455 1128583 out.go:177] * Done! kubectl is now configured to use "no-preload-188109" cluster and "default" namespace by default
	I0318 14:27:47.299396 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:47.299722 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:47.299759 1129259 kubeadm.go:309] 
	I0318 14:27:47.299848 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:27:47.300040 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:27:47.300062 1129259 kubeadm.go:309] 
	I0318 14:27:47.300106 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:27:47.300187 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:27:47.300340 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:27:47.300356 1129259 kubeadm.go:309] 
	I0318 14:27:47.300534 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:27:47.300590 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:27:47.300636 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:27:47.300646 1129259 kubeadm.go:309] 
	I0318 14:27:47.300803 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:27:47.300929 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:27:47.300942 1129259 kubeadm.go:309] 
	I0318 14:27:47.301093 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:27:47.301232 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:27:47.301346 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:27:47.301475 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:27:47.301496 1129259 kubeadm.go:309] 
	I0318 14:27:47.303477 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:47.303616 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:27:47.303718 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 14:27:47.303903 1129259 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 14:27:47.303969 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:27:47.790664 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:47.807959 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:47.820332 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:47.820357 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:47.820422 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:47.832124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:47.832219 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:47.845017 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:47.856877 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:47.856954 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:47.868530 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.879309 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:47.879394 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.891766 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:47.903303 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:47.903392 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:27:47.914820 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:48.170124 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:29:44.224147 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:29:44.224414 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 14:29:44.225789 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:29:44.225885 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:29:44.226010 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:29:44.226135 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:29:44.226292 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:29:44.226384 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:29:44.228246 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:29:44.228346 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:29:44.228440 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:29:44.228567 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:29:44.228684 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:29:44.228803 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:29:44.228874 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:29:44.229018 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:29:44.229096 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:29:44.229166 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:29:44.229231 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:29:44.229269 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:29:44.229316 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:29:44.229365 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:29:44.229415 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:29:44.229468 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:29:44.229540 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:29:44.229663 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:29:44.229755 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:29:44.229804 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:29:44.229893 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:29:44.231359 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:29:44.231484 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:29:44.231592 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:29:44.231674 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:29:44.231777 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:29:44.231993 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:29:44.232046 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:29:44.232103 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232333 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232411 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232621 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232691 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232896 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232955 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233113 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233178 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233370 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233382 1129259 kubeadm.go:309] 
	I0318 14:29:44.233430 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:29:44.233480 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:29:44.233492 1129259 kubeadm.go:309] 
	I0318 14:29:44.233523 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:29:44.233554 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:29:44.233642 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:29:44.233655 1129259 kubeadm.go:309] 
	I0318 14:29:44.233797 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:29:44.233830 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:29:44.233860 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:29:44.233867 1129259 kubeadm.go:309] 
	I0318 14:29:44.233994 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:29:44.234116 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:29:44.234124 1129259 kubeadm.go:309] 
	I0318 14:29:44.234246 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:29:44.234389 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:29:44.234516 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:29:44.234606 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:29:44.234676 1129259 kubeadm.go:309] 
	I0318 14:29:44.234699 1129259 kubeadm.go:393] duration metric: took 7m59.028536241s to StartCluster
	I0318 14:29:44.234794 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:29:44.234989 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:29:44.301714 1129259 cri.go:89] found id: ""
	I0318 14:29:44.301764 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.301792 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:29:44.301801 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:29:44.301865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:29:44.345158 1129259 cri.go:89] found id: ""
	I0318 14:29:44.345197 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.345209 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:29:44.345217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:29:44.345281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:29:44.381184 1129259 cri.go:89] found id: ""
	I0318 14:29:44.381217 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.381227 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:29:44.381232 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:29:44.381296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:29:44.419906 1129259 cri.go:89] found id: ""
	I0318 14:29:44.419972 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.419987 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:29:44.419996 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:29:44.420085 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:29:44.459683 1129259 cri.go:89] found id: ""
	I0318 14:29:44.459732 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.459747 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:29:44.459755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:29:44.459848 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:29:44.502434 1129259 cri.go:89] found id: ""
	I0318 14:29:44.502477 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.502490 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:29:44.502499 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:29:44.502563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:29:44.543384 1129259 cri.go:89] found id: ""
	I0318 14:29:44.543417 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.543429 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:29:44.543438 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:29:44.543509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:29:44.584405 1129259 cri.go:89] found id: ""
	I0318 14:29:44.584450 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.584463 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:29:44.584478 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:29:44.584496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:29:44.638997 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:29:44.639036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:29:44.656641 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:29:44.656679 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:29:44.757942 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:29:44.757976 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:29:44.757994 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:29:44.878791 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:29:44.878838 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 14:29:44.926371 1129259 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 14:29:44.926432 1129259 out.go:239] * 
	W0318 14:29:44.926513 1129259 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.926548 1129259 out.go:239] * 
	W0318 14:29:44.927402 1129259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:29:44.931815 1129259 out.go:177] 
	W0318 14:29:44.933471 1129259 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.933562 1129259 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 14:29:44.933609 1129259 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 14:29:44.935544 1129259 out.go:177] 
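The kubeadm output above repeatedly points at the same few checks before retrying. As a minimal sketch only (assuming a systemd host and CRI-O on /var/run/crio/crio.sock, which is what this job uses, and that the commands are run on the affected node, e.g. via minikube ssh), the suggested sequence is roughly:

	# Confirm the kubelet service state and its recent log entries
	systemctl status kubelet
	journalctl -xeu kubelet

	# List control-plane containers started by CRI-O and inspect a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Per the suggestion above, retry with the kubelet cgroup driver pinned to systemd
	minikube start --extra-config=kubelet.cgroup-driver=systemd

In this run the symptom is the kubelet health endpoint at 127.0.0.1:10248 refusing connections throughout the 4m0s wait, so the journalctl output is the most likely place to show why the kubelet never came up.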
	
	
	==> CRI-O <==
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.406943258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772522406914859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d2d5d6c-2cfa-4d2f-9783-3a8be109d72f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.407713264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07760f0b-29de-4564-8181-c77d68eb332b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.407809606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07760f0b-29de-4564-8181-c77d68eb332b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.408148899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1344f16b5555a97002ed52a6c3834e9966e397fd09e5fdf2c36bcc9de9f6ee07,PodSandboxId:169cb175ee20db264ecbcc3a7520202f58031191dbd0f9d96f00def65c5e1342,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771979133803181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaa79fa-95b2-40d3-af0c-db60292f77e3,},Annotations:map[string]string{io.kubernetes.container.hash: 32147c6c,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8467179103054fb6bd5637c3eae01e238b8e692b8f098eadf5dd2fe216e9ea0,PodSandboxId:fd6356e98c68f7d1d419c2af5a512d68e2f0dac903234a215f01521c1eaa8d69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771976991935985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4knv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2afcd2a-a9a3-494b-8f2b-c532cd60a569,},Annotations:map[string]string{io.kubernetes.container.hash: b1c442f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b542e08e9c09dff3b3bbc84f43ad19670e25229602837fa457a544312a38dc,PodSandboxId:9100da1d021db6c122d322af0877623cffdf09b84b95031985570fc47208b9f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771977037242483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fm52r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
8d62bd5-d44a-4de7-a73c-7cd615b34470,},Annotations:map[string]string{io.kubernetes.container.hash: 1be32d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9668408c6e663a80a0cab513d18ea9f37971e1e01f4b7560833dca876b5ce93a,PodSandboxId:97c5545ad25814d29cea55b2d18fdd7d7b2ddd66668b4eb048e2dd08d2bc3323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710771976477657420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90d43cdd-0e1d-4158-9403-91bb7b556f70,},Annotations:map[string]string{io.kubernetes.container.hash: 312cd0ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a1b030bde32d78f1b15deb3645a01627c23ad954ab6837ec01c871d2fe3a9a,PodSandboxId:18165a55b415b62a5225475b069e17c4116523a9d25fa1e2f821ae592e448467,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771957435610006,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c19e521dbd99b569b23aeba612d73c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21919201899501e0d5cb276865652fddb60e3a60b34d32ef961afbd51e92b13,PodSandboxId:99b1a503f3be39f1d359dae878a8508e767767cfba4e3fd1dd86c7de10b319c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771957371605643,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c340aa58af706176d89edc63c324d8ab28c946bfcdb8fb227646400e547a4cc,PodSandboxId:d825a07ba7fca67c031a5ac3f2ea03cd2dd42aeeaca7de3181a1a333c9413cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710771957397158494,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8669cf547316ec4dc91eaa007d2b3839,},Annotations:map[string]string{io.kubernetes.container.hash: f7ca46e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bf3db5a22cc83a41f70eebc56949fa622b976590c70f6c03450be3dfe6fb67,PodSandboxId:5e36507d9de51336a71f56211432bf9e31a61577b5ea567614463df898470e9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710771957346213706,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b6945ac6b0392727f3194f0635bd6c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3290b4ce5efac6bd753f0d9ab26d22a63ed9f1fbfe54ac78b0c66f3c4d0b9dfd,PodSandboxId:a1b555e34c58a059c0ad289c8f270ec2e4ebee2fa9b22839698939ba4debcc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710771663535583956,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07760f0b-29de-4564-8181-c77d68eb332b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.452214629Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c7ad47b-dafc-4a35-bedd-ad0aae139cd0 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.452333205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c7ad47b-dafc-4a35-bedd-ad0aae139cd0 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.453747602Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7eca7518-f6ef-4996-9225-0e27e9b0e142 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.454148315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772522454126991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7eca7518-f6ef-4996-9225-0e27e9b0e142 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.456718885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50e6f066-c41c-4cfd-bffe-fa1bd0da2af1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.456964806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50e6f066-c41c-4cfd-bffe-fa1bd0da2af1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.457507148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1344f16b5555a97002ed52a6c3834e9966e397fd09e5fdf2c36bcc9de9f6ee07,PodSandboxId:169cb175ee20db264ecbcc3a7520202f58031191dbd0f9d96f00def65c5e1342,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771979133803181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaa79fa-95b2-40d3-af0c-db60292f77e3,},Annotations:map[string]string{io.kubernetes.container.hash: 32147c6c,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8467179103054fb6bd5637c3eae01e238b8e692b8f098eadf5dd2fe216e9ea0,PodSandboxId:fd6356e98c68f7d1d419c2af5a512d68e2f0dac903234a215f01521c1eaa8d69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771976991935985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4knv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2afcd2a-a9a3-494b-8f2b-c532cd60a569,},Annotations:map[string]string{io.kubernetes.container.hash: b1c442f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b542e08e9c09dff3b3bbc84f43ad19670e25229602837fa457a544312a38dc,PodSandboxId:9100da1d021db6c122d322af0877623cffdf09b84b95031985570fc47208b9f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771977037242483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fm52r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
8d62bd5-d44a-4de7-a73c-7cd615b34470,},Annotations:map[string]string{io.kubernetes.container.hash: 1be32d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9668408c6e663a80a0cab513d18ea9f37971e1e01f4b7560833dca876b5ce93a,PodSandboxId:97c5545ad25814d29cea55b2d18fdd7d7b2ddd66668b4eb048e2dd08d2bc3323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710771976477657420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90d43cdd-0e1d-4158-9403-91bb7b556f70,},Annotations:map[string]string{io.kubernetes.container.hash: 312cd0ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a1b030bde32d78f1b15deb3645a01627c23ad954ab6837ec01c871d2fe3a9a,PodSandboxId:18165a55b415b62a5225475b069e17c4116523a9d25fa1e2f821ae592e448467,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771957435610006,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c19e521dbd99b569b23aeba612d73c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21919201899501e0d5cb276865652fddb60e3a60b34d32ef961afbd51e92b13,PodSandboxId:99b1a503f3be39f1d359dae878a8508e767767cfba4e3fd1dd86c7de10b319c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771957371605643,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c340aa58af706176d89edc63c324d8ab28c946bfcdb8fb227646400e547a4cc,PodSandboxId:d825a07ba7fca67c031a5ac3f2ea03cd2dd42aeeaca7de3181a1a333c9413cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710771957397158494,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8669cf547316ec4dc91eaa007d2b3839,},Annotations:map[string]string{io.kubernetes.container.hash: f7ca46e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bf3db5a22cc83a41f70eebc56949fa622b976590c70f6c03450be3dfe6fb67,PodSandboxId:5e36507d9de51336a71f56211432bf9e31a61577b5ea567614463df898470e9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710771957346213706,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b6945ac6b0392727f3194f0635bd6c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3290b4ce5efac6bd753f0d9ab26d22a63ed9f1fbfe54ac78b0c66f3c4d0b9dfd,PodSandboxId:a1b555e34c58a059c0ad289c8f270ec2e4ebee2fa9b22839698939ba4debcc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710771663535583956,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50e6f066-c41c-4cfd-bffe-fa1bd0da2af1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.510957912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1dbb262c-5b1a-4a7b-8abd-0a74adf362f7 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.511090596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1dbb262c-5b1a-4a7b-8abd-0a74adf362f7 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.512717900Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9c0b076-4111-4b32-89ef-01efebd6a40c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.513290617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772522513261344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9c0b076-4111-4b32-89ef-01efebd6a40c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.513981727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ceadd98e-8a59-4c98-893f-9b8476a31965 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.514060257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ceadd98e-8a59-4c98-893f-9b8476a31965 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.514260280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1344f16b5555a97002ed52a6c3834e9966e397fd09e5fdf2c36bcc9de9f6ee07,PodSandboxId:169cb175ee20db264ecbcc3a7520202f58031191dbd0f9d96f00def65c5e1342,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771979133803181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaa79fa-95b2-40d3-af0c-db60292f77e3,},Annotations:map[string]string{io.kubernetes.container.hash: 32147c6c,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8467179103054fb6bd5637c3eae01e238b8e692b8f098eadf5dd2fe216e9ea0,PodSandboxId:fd6356e98c68f7d1d419c2af5a512d68e2f0dac903234a215f01521c1eaa8d69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771976991935985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4knv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2afcd2a-a9a3-494b-8f2b-c532cd60a569,},Annotations:map[string]string{io.kubernetes.container.hash: b1c442f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b542e08e9c09dff3b3bbc84f43ad19670e25229602837fa457a544312a38dc,PodSandboxId:9100da1d021db6c122d322af0877623cffdf09b84b95031985570fc47208b9f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771977037242483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fm52r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
8d62bd5-d44a-4de7-a73c-7cd615b34470,},Annotations:map[string]string{io.kubernetes.container.hash: 1be32d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9668408c6e663a80a0cab513d18ea9f37971e1e01f4b7560833dca876b5ce93a,PodSandboxId:97c5545ad25814d29cea55b2d18fdd7d7b2ddd66668b4eb048e2dd08d2bc3323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710771976477657420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90d43cdd-0e1d-4158-9403-91bb7b556f70,},Annotations:map[string]string{io.kubernetes.container.hash: 312cd0ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a1b030bde32d78f1b15deb3645a01627c23ad954ab6837ec01c871d2fe3a9a,PodSandboxId:18165a55b415b62a5225475b069e17c4116523a9d25fa1e2f821ae592e448467,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771957435610006,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c19e521dbd99b569b23aeba612d73c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21919201899501e0d5cb276865652fddb60e3a60b34d32ef961afbd51e92b13,PodSandboxId:99b1a503f3be39f1d359dae878a8508e767767cfba4e3fd1dd86c7de10b319c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771957371605643,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c340aa58af706176d89edc63c324d8ab28c946bfcdb8fb227646400e547a4cc,PodSandboxId:d825a07ba7fca67c031a5ac3f2ea03cd2dd42aeeaca7de3181a1a333c9413cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710771957397158494,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8669cf547316ec4dc91eaa007d2b3839,},Annotations:map[string]string{io.kubernetes.container.hash: f7ca46e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bf3db5a22cc83a41f70eebc56949fa622b976590c70f6c03450be3dfe6fb67,PodSandboxId:5e36507d9de51336a71f56211432bf9e31a61577b5ea567614463df898470e9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710771957346213706,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b6945ac6b0392727f3194f0635bd6c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3290b4ce5efac6bd753f0d9ab26d22a63ed9f1fbfe54ac78b0c66f3c4d0b9dfd,PodSandboxId:a1b555e34c58a059c0ad289c8f270ec2e4ebee2fa9b22839698939ba4debcc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710771663535583956,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ceadd98e-8a59-4c98-893f-9b8476a31965 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.551596141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8dc46f5-8a14-4cba-bb5b-2bef37044d4f name=/runtime.v1.RuntimeService/Version
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.551689037Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8dc46f5-8a14-4cba-bb5b-2bef37044d4f name=/runtime.v1.RuntimeService/Version
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.552928574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e039827c-a886-4066-8598-2c75839cac25 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.553521676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772522553484952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e039827c-a886-4066-8598-2c75839cac25 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.554148136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=143f8306-8e6a-4d6e-8a45-a0e40f8b715a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.554204792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=143f8306-8e6a-4d6e-8a45-a0e40f8b715a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:22 embed-certs-767719 crio[694]: time="2024-03-18 14:35:22.554476241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1344f16b5555a97002ed52a6c3834e9966e397fd09e5fdf2c36bcc9de9f6ee07,PodSandboxId:169cb175ee20db264ecbcc3a7520202f58031191dbd0f9d96f00def65c5e1342,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771979133803181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaa79fa-95b2-40d3-af0c-db60292f77e3,},Annotations:map[string]string{io.kubernetes.container.hash: 32147c6c,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8467179103054fb6bd5637c3eae01e238b8e692b8f098eadf5dd2fe216e9ea0,PodSandboxId:fd6356e98c68f7d1d419c2af5a512d68e2f0dac903234a215f01521c1eaa8d69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771976991935985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4knv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2afcd2a-a9a3-494b-8f2b-c532cd60a569,},Annotations:map[string]string{io.kubernetes.container.hash: b1c442f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b542e08e9c09dff3b3bbc84f43ad19670e25229602837fa457a544312a38dc,PodSandboxId:9100da1d021db6c122d322af0877623cffdf09b84b95031985570fc47208b9f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771977037242483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fm52r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
8d62bd5-d44a-4de7-a73c-7cd615b34470,},Annotations:map[string]string{io.kubernetes.container.hash: 1be32d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9668408c6e663a80a0cab513d18ea9f37971e1e01f4b7560833dca876b5ce93a,PodSandboxId:97c5545ad25814d29cea55b2d18fdd7d7b2ddd66668b4eb048e2dd08d2bc3323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710771976477657420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90d43cdd-0e1d-4158-9403-91bb7b556f70,},Annotations:map[string]string{io.kubernetes.container.hash: 312cd0ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a1b030bde32d78f1b15deb3645a01627c23ad954ab6837ec01c871d2fe3a9a,PodSandboxId:18165a55b415b62a5225475b069e17c4116523a9d25fa1e2f821ae592e448467,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771957435610006,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c19e521dbd99b569b23aeba612d73c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21919201899501e0d5cb276865652fddb60e3a60b34d32ef961afbd51e92b13,PodSandboxId:99b1a503f3be39f1d359dae878a8508e767767cfba4e3fd1dd86c7de10b319c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771957371605643,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c340aa58af706176d89edc63c324d8ab28c946bfcdb8fb227646400e547a4cc,PodSandboxId:d825a07ba7fca67c031a5ac3f2ea03cd2dd42aeeaca7de3181a1a333c9413cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710771957397158494,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8669cf547316ec4dc91eaa007d2b3839,},Annotations:map[string]string{io.kubernetes.container.hash: f7ca46e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bf3db5a22cc83a41f70eebc56949fa622b976590c70f6c03450be3dfe6fb67,PodSandboxId:5e36507d9de51336a71f56211432bf9e31a61577b5ea567614463df898470e9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710771957346213706,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b6945ac6b0392727f3194f0635bd6c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3290b4ce5efac6bd753f0d9ab26d22a63ed9f1fbfe54ac78b0c66f3c4d0b9dfd,PodSandboxId:a1b555e34c58a059c0ad289c8f270ec2e4ebee2fa9b22839698939ba4debcc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710771663535583956,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=143f8306-8e6a-4d6e-8a45-a0e40f8b715a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1344f16b5555a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   169cb175ee20d       storage-provisioner
	12b542e08e9c0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   9100da1d021db       coredns-5dd5756b68-fm52r
	e846717910305       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   fd6356e98c68f       coredns-5dd5756b68-4knv5
	9668408c6e663       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   97c5545ad2581       kube-proxy-f4547
	a7a1b030bde32       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   18165a55b415b       kube-scheduler-embed-certs-767719
	5c340aa58af70       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   d825a07ba7fca       etcd-embed-certs-767719
	f219192018995       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   99b1a503f3be3       kube-apiserver-embed-certs-767719
	43bf3db5a22cc       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   5e36507d9de51       kube-controller-manager-embed-certs-767719
	3290b4ce5efac       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Exited              kube-apiserver            1                   a1b555e34c58a       kube-apiserver-embed-certs-767719
	
	
	==> coredns [12b542e08e9c09dff3b3bbc84f43ad19670e25229602837fa457a544312a38dc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [e8467179103054fb6bd5637c3eae01e238b8e692b8f098eadf5dd2fe216e9ea0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-767719
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-767719
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=embed-certs-767719
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T14_26_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 14:26:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-767719
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:35:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:31:31 +0000   Mon, 18 Mar 2024 14:25:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:31:31 +0000   Mon, 18 Mar 2024 14:25:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:31:31 +0000   Mon, 18 Mar 2024 14:25:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:31:31 +0000   Mon, 18 Mar 2024 14:26:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.45
	  Hostname:    embed-certs-767719
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 13390a57d52543dbaf6fbe438b5b11b5
	  System UUID:                13390a57-d525-43db-af6f-be438b5b11b5
	  Boot ID:                    23ea0ecf-773f-457f-96a0-b747992c8e2e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-4knv5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-5dd5756b68-fm52r                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-embed-certs-767719                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-767719             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-767719    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-f4547                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-767719             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-w8z6p               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m5s   kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node embed-certs-767719 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node embed-certs-767719 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node embed-certs-767719 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m19s  kubelet          Node embed-certs-767719 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m19s  kubelet          Node embed-certs-767719 status is now: NodeReady
	  Normal  RegisteredNode           9m7s   node-controller  Node embed-certs-767719 event: Registered Node embed-certs-767719 in Controller
	
	
	==> dmesg <==
	[  +0.052942] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040960] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.576024] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.893396] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.645190] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.411568] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.064737] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064767] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.225871] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.130563] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.266455] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +5.326384] systemd-fstab-generator[778]: Ignoring "noauto" option for root device
	[  +0.064378] kauditd_printk_skb: 130 callbacks suppressed
	[Mar18 14:21] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +5.607731] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.787586] kauditd_printk_skb: 69 callbacks suppressed
	[Mar18 14:25] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.840373] systemd-fstab-generator[3399]: Ignoring "noauto" option for root device
	[  +4.756152] kauditd_printk_skb: 54 callbacks suppressed
	[Mar18 14:26] systemd-fstab-generator[3723]: Ignoring "noauto" option for root device
	[ +13.005742] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +0.089530] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 14:27] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [5c340aa58af706176d89edc63c324d8ab28c946bfcdb8fb227646400e547a4cc] <==
	{"level":"info","ts":"2024-03-18T14:25:57.79329Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T14:25:57.795909Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a84d3445f2145a16","initial-advertise-peer-urls":["https://192.168.72.45:2380"],"listen-peer-urls":["https://192.168.72.45:2380"],"advertise-client-urls":["https://192.168.72.45:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.45:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T14:25:57.796177Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T14:25:57.793315Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.45:2380"}
	{"level":"info","ts":"2024-03-18T14:25:57.799955Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.45:2380"}
	{"level":"info","ts":"2024-03-18T14:25:57.79327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a84d3445f2145a16 switched to configuration voters=(12127406846597421590)"}
	{"level":"info","ts":"2024-03-18T14:25:57.800459Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"45408efcc8fb3821","local-member-id":"a84d3445f2145a16","added-peer-id":"a84d3445f2145a16","added-peer-peer-urls":["https://192.168.72.45:2380"]}
	{"level":"info","ts":"2024-03-18T14:25:57.959616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a84d3445f2145a16 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T14:25:57.95979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a84d3445f2145a16 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T14:25:57.959919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a84d3445f2145a16 received MsgPreVoteResp from a84d3445f2145a16 at term 1"}
	{"level":"info","ts":"2024-03-18T14:25:57.960023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a84d3445f2145a16 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T14:25:57.960051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a84d3445f2145a16 received MsgVoteResp from a84d3445f2145a16 at term 2"}
	{"level":"info","ts":"2024-03-18T14:25:57.960161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a84d3445f2145a16 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T14:25:57.960197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a84d3445f2145a16 elected leader a84d3445f2145a16 at term 2"}
	{"level":"info","ts":"2024-03-18T14:25:57.964915Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:25:57.967712Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a84d3445f2145a16","local-member-attributes":"{Name:embed-certs-767719 ClientURLs:[https://192.168.72.45:2379]}","request-path":"/0/members/a84d3445f2145a16/attributes","cluster-id":"45408efcc8fb3821","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T14:25:57.969456Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:25:57.970689Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.45:2379"}
	{"level":"info","ts":"2024-03-18T14:25:57.974496Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:25:57.975604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T14:25:57.982457Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T14:25:57.982552Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T14:25:57.982809Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"45408efcc8fb3821","local-member-id":"a84d3445f2145a16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:25:57.982909Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:25:57.982937Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 14:35:22 up 14 min,  0 users,  load average: 0.25, 0.26, 0.21
	Linux embed-certs-767719 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3290b4ce5efac6bd753f0d9ab26d22a63ed9f1fbfe54ac78b0c66f3c4d0b9dfd] <==
	W0318 14:25:49.829714       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:49.910799       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.010863       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.209886       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.235635       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.306936       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.400778       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.430156       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.441319       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.548084       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.594705       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.604653       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.802992       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.899331       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.921985       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.057366       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.107554       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.138251       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.164040       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.189613       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.281240       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.281644       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.480711       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.619661       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.665695       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f21919201899501e0d5cb276865652fddb60e3a60b34d32ef961afbd51e92b13] <==
	W0318 14:31:01.186504       1 handler_proxy.go:93] no RequestInfo found in the context
	W0318 14:31:01.186510       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:31:01.186834       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:31:01.186863       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0318 14:31:01.186910       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:31:01.188823       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:32:00.091308       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:32:01.186992       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:32:01.187204       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:32:01.187239       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:32:01.189162       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:32:01.189265       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:32:01.189292       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:33:00.090571       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 14:34:00.090216       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:34:01.187796       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:34:01.187939       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:34:01.187948       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:34:01.190318       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:34:01.190360       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:34:01.190366       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:35:00.090800       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [43bf3db5a22cc83a41f70eebc56949fa622b976590c70f6c03450be3dfe6fb67] <==
	I0318 14:29:45.621533       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:30:15.180671       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:30:15.631359       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:30:45.187086       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:30:45.641677       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:31:15.194875       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:31:15.652781       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:31:45.203469       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:31:45.662540       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:32:09.707051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="513.583µs"
	E0318 14:32:15.212701       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:32:15.672550       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:32:24.701478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="189.309µs"
	E0318 14:32:45.220561       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:32:45.686140       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:33:15.228923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:33:15.695138       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:33:45.235700       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:33:45.704052       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:34:15.243482       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:34:15.714513       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:34:45.251778       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:34:45.723502       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:35:15.258059       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:35:15.732713       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9668408c6e663a80a0cab513d18ea9f37971e1e01f4b7560833dca876b5ce93a] <==
	I0318 14:26:16.980114       1 server_others.go:69] "Using iptables proxy"
	I0318 14:26:17.204468       1 node.go:141] Successfully retrieved node IP: 192.168.72.45
	I0318 14:26:17.563658       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 14:26:17.563759       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 14:26:17.571611       1 server_others.go:152] "Using iptables Proxier"
	I0318 14:26:17.572335       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 14:26:17.573273       1 server.go:846] "Version info" version="v1.28.4"
	I0318 14:26:17.573478       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 14:26:17.575853       1 config.go:188] "Starting service config controller"
	I0318 14:26:17.576859       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 14:26:17.576937       1 config.go:97] "Starting endpoint slice config controller"
	I0318 14:26:17.576968       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 14:26:17.578883       1 config.go:315] "Starting node config controller"
	I0318 14:26:17.580783       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 14:26:17.677474       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 14:26:17.680945       1 shared_informer.go:318] Caches are synced for node config
	I0318 14:26:17.680975       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [a7a1b030bde32d78f1b15deb3645a01627c23ad954ab6837ec01c871d2fe3a9a] <==
	W0318 14:26:01.087721       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:01.087788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:01.105867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 14:26:01.105929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 14:26:01.123367       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 14:26:01.123482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 14:26:01.232802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:01.232848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:01.238454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 14:26:01.238521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 14:26:01.269497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 14:26:01.269546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 14:26:01.290370       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 14:26:01.290524       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 14:26:01.413472       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 14:26:01.413876       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 14:26:01.432878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 14:26:01.433016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 14:26:01.455471       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 14:26:01.455609       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 14:26:01.532229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:01.532296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:01.652797       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 14:26:01.652850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 14:26:04.360603       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:33:03 embed-certs-767719 kubelet[3730]: E0318 14:33:03.741878    3730 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:33:03 embed-certs-767719 kubelet[3730]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:33:03 embed-certs-767719 kubelet[3730]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:33:03 embed-certs-767719 kubelet[3730]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:33:03 embed-certs-767719 kubelet[3730]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:33:17 embed-certs-767719 kubelet[3730]: E0318 14:33:17.685500    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:33:32 embed-certs-767719 kubelet[3730]: E0318 14:33:32.684682    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:33:45 embed-certs-767719 kubelet[3730]: E0318 14:33:45.684195    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:33:58 embed-certs-767719 kubelet[3730]: E0318 14:33:58.684657    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:34:03 embed-certs-767719 kubelet[3730]: E0318 14:34:03.737780    3730 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:34:03 embed-certs-767719 kubelet[3730]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:34:03 embed-certs-767719 kubelet[3730]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:34:03 embed-certs-767719 kubelet[3730]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:34:03 embed-certs-767719 kubelet[3730]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:34:13 embed-certs-767719 kubelet[3730]: E0318 14:34:13.685112    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:34:24 embed-certs-767719 kubelet[3730]: E0318 14:34:24.683681    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:34:35 embed-certs-767719 kubelet[3730]: E0318 14:34:35.684355    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:34:49 embed-certs-767719 kubelet[3730]: E0318 14:34:49.683974    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:35:03 embed-certs-767719 kubelet[3730]: E0318 14:35:03.739276    3730 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:35:03 embed-certs-767719 kubelet[3730]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:35:03 embed-certs-767719 kubelet[3730]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:35:03 embed-certs-767719 kubelet[3730]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:35:03 embed-certs-767719 kubelet[3730]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:35:04 embed-certs-767719 kubelet[3730]: E0318 14:35:04.685475    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:35:17 embed-certs-767719 kubelet[3730]: E0318 14:35:17.684581    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	
	
	==> storage-provisioner [1344f16b5555a97002ed52a6c3834e9966e397fd09e5fdf2c36bcc9de9f6ee07] <==
	I0318 14:26:19.241097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 14:26:19.251946       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 14:26:19.252203       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 14:26:19.271067       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 14:26:19.271456       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-767719_1b608b2e-b75b-4b86-a809-9846c8e1406b!
	I0318 14:26:19.272811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f704de54-9d45-4216-9e7b-770f62932150", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-767719_1b608b2e-b75b-4b86-a809-9846c8e1406b became leader
	I0318 14:26:19.371662       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-767719_1b608b2e-b75b-4b86-a809-9846c8e1406b!
	

                                                
                                                
-- /stdout --
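The kubelet entries in the log above repeatedly report ImagePullBackOff for the metrics-server container because the addon image was redirected to fake.domain. As a rough illustration only (not part of the test harness; the kubeconfig path and the k8s-app=metrics-server label selector are assumptions), the same waiting reason could be read with client-go along these lines:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); the path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The k8s-app=metrics-server label matches the stock metrics-server manifests;
	// it is assumed here rather than taken from this report.
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, status := range pod.Status.ContainerStatuses {
			if w := status.State.Waiting; w != nil {
				// For the pod above this would print ImagePullBackOff together with
				// the "Back-off pulling image fake.domain/..." message.
				fmt.Printf("%s: %s (%s)\n", pod.Name, w.Reason, w.Message)
			}
		}
	}
}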
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767719 -n embed-certs-767719
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-767719 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-w8z6p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-767719 describe pod metrics-server-57f55c9bc5-w8z6p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-767719 describe pod metrics-server-57f55c9bc5-w8z6p: exit status 1 (65.43061ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-w8z6p" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-767719 describe pod metrics-server-57f55c9bc5-w8z6p: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.42s)
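One detail worth noting in the post-mortem above: metrics-server-57f55c9bc5-w8z6p is reported as non-running by the pod listing, but the follow-up kubectl describe fails with NotFound because the pod had already been deleted or replaced by then. A minimal sketch (illustrative names, not taken from helpers_test.go) of tolerating that race with client-go:

package diag

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIfStillPresent fetches a pod for a post-mortem describe but treats
// NotFound as "already gone" instead of an error, matching the situation
// above where the pod vanished between the listing and the describe.
func podIfStillPresent(ctx context.Context, client kubernetes.Interface, namespace, name string) (*corev1.Pod, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return nil, nil // pod was deleted in the meantime; nothing to describe
	}
	if err != nil {
		return nil, err
	}
	return pod, nil
}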

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:35:42.455947549 +0000 UTC m=+6660.249406756
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
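The step that failed here is, in essence, a bounded wait for a Running pod carrying the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace. A simplified sketch of such a wait follows (plain polling loop; the 5-second interval and kubeconfig handling are assumptions, and this is not the helpers_test.go implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same bound as the failing step: give the dashboard pods up to 9 minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("dashboard pod is running:", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			// This is the state the test ended in: context deadline exceeded.
			fmt.Println("gave up waiting:", ctx.Err())
			return
		case <-time.After(5 * time.Second): // poll interval is an assumption
		}
	}
}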
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-075922 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-075922 logs -n 25: (2.115746688s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-059272 sudo find                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo find                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-059272 sudo crio                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo crio                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-059272                                       | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| delete  | -p flannel-059272                                      | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-784874 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | disable-driver-mounts-784874                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:14 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-188109             | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767719            | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-075922  | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC | 18 Mar 24 14:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC |                     |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-782728        | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-188109                  | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC | 18 Mar 24 14:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767719                 | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-075922       | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-782728             | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:17:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:17:21.149860 1129259 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:17:21.150009 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150020 1129259 out.go:304] Setting ErrFile to fd 2...
	I0318 14:17:21.150027 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150261 1129259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:17:21.150831 1129259 out.go:298] Setting JSON to false
	I0318 14:17:21.151818 1129259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21588,"bootTime":1710749853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:17:21.151904 1129259 start.go:139] virtualization: kvm guest
	I0318 14:17:21.154086 1129259 out.go:177] * [old-k8s-version-782728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:17:21.155595 1129259 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:17:21.157136 1129259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:17:21.155603 1129259 notify.go:220] Checking for updates...
	I0318 14:17:21.160112 1129259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:17:21.161672 1129259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:17:21.163212 1129259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:17:21.164653 1129259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:17:21.166692 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:17:21.167108 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.167176 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.182529 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0318 14:17:21.183003 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.183578 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.183602 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.183959 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.184192 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.186217 1129259 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 14:17:21.187902 1129259 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:17:21.188243 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.188288 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.204193 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0318 14:17:21.204646 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.205226 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.205262 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.205658 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.205879 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.243555 1129259 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 14:17:21.244857 1129259 start.go:297] selected driver: kvm2
	I0318 14:17:21.244882 1129259 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.245008 1129259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:17:21.245726 1129259 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.245812 1129259 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:17:21.261810 1129259 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:17:21.262852 1129259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:17:21.262962 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:17:21.262975 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:17:21.263064 1129259 start.go:340] cluster config:
	{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.263366 1129259 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.265819 1129259 out.go:177] * Starting "old-k8s-version-782728" primary control-plane node in "old-k8s-version-782728" cluster
	I0318 14:17:24.228169 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:21.267156 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:17:21.267198 1129259 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 14:17:21.267214 1129259 cache.go:56] Caching tarball of preloaded images
	I0318 14:17:21.267311 1129259 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:17:21.267327 1129259 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 14:17:21.267448 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:17:21.267695 1129259 start.go:360] acquireMachinesLock for old-k8s-version-782728: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:17:27.300185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:33.380164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:36.452102 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:42.536087 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:45.604211 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:51.684168 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:54.756227 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:00.836108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:03.908246 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:09.988223 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:13.060123 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:19.140179 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:22.212209 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:28.292206 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:31.364121 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:37.444195 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:40.516108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:46.596160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:49.668120 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:55.748134 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:58.820202 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:04.900183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:07.972128 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:14.052140 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:17.124242 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:23.204175 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:26.276172 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:32.356183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:35.428256 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:41.508181 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:44.580142 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:50.660193 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:53.732160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:59.812151 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:02.884164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:08.964174 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:12.036185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:18.116178 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:21.188147 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:27.268137 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:30.340177 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:33.345074 1128788 start.go:364] duration metric: took 4m12.599457373s to acquireMachinesLock for "embed-certs-767719"
	I0318 14:20:33.345136 1128788 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:33.345145 1128788 fix.go:54] fixHost starting: 
	I0318 14:20:33.345584 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:33.345638 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:33.362007 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0318 14:20:33.362504 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:33.363014 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:20:33.363037 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:33.363432 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:33.363634 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:33.363787 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:20:33.365593 1128788 fix.go:112] recreateIfNeeded on embed-certs-767719: state=Stopped err=<nil>
	I0318 14:20:33.365619 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	W0318 14:20:33.365792 1128788 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:33.367525 1128788 out.go:177] * Restarting existing kvm2 VM for "embed-certs-767719" ...
	I0318 14:20:33.368930 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Start
	I0318 14:20:33.369145 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring networks are active...
	I0318 14:20:33.370041 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network default is active
	I0318 14:20:33.370474 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network mk-embed-certs-767719 is active
	I0318 14:20:33.370832 1128788 main.go:141] libmachine: (embed-certs-767719) Getting domain xml...
	I0318 14:20:33.371609 1128788 main.go:141] libmachine: (embed-certs-767719) Creating domain...
	I0318 14:20:34.596425 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting to get IP...
	I0318 14:20:34.597292 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.597677 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.597753 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.597666 1130210 retry.go:31] will retry after 244.312377ms: waiting for machine to come up
	I0318 14:20:34.843360 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.844039 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.844082 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.843988 1130210 retry.go:31] will retry after 388.782007ms: waiting for machine to come up
	I0318 14:20:35.234931 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.235304 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.235334 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.235252 1130210 retry.go:31] will retry after 449.871291ms: waiting for machine to come up
	I0318 14:20:33.342334 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:33.342408 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.342790 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:20:33.342823 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.343061 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:20:33.344920 1128583 machine.go:97] duration metric: took 4m37.408911801s to provisionDockerMachine
	I0318 14:20:33.344982 1128583 fix.go:56] duration metric: took 4m37.431584024s for fixHost
	I0318 14:20:33.344992 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 4m37.431613044s
	W0318 14:20:33.345017 1128583 start.go:713] error starting host: provision: host is not running
	W0318 14:20:33.345209 1128583 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 14:20:33.345223 1128583 start.go:728] Will try again in 5 seconds ...
	I0318 14:20:35.687048 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.687565 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.687604 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.687508 1130210 retry.go:31] will retry after 470.225551ms: waiting for machine to come up
	I0318 14:20:36.159138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.159642 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.159668 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.159590 1130210 retry.go:31] will retry after 638.634635ms: waiting for machine to come up
	I0318 14:20:36.799431 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.799820 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.799857 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.799764 1130210 retry.go:31] will retry after 758.659569ms: waiting for machine to come up
	I0318 14:20:37.559752 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:37.560189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:37.560224 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:37.560116 1130210 retry.go:31] will retry after 1.163344023s: waiting for machine to come up
	I0318 14:20:38.724981 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:38.725498 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:38.725561 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:38.725341 1130210 retry.go:31] will retry after 1.155934539s: waiting for machine to come up
	I0318 14:20:39.882622 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:39.883025 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:39.883074 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:39.882966 1130210 retry.go:31] will retry after 1.832023161s: waiting for machine to come up
	I0318 14:20:38.347296 1128583 start.go:360] acquireMachinesLock for no-preload-188109: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:20:41.717138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:41.717723 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:41.717757 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:41.717642 1130210 retry.go:31] will retry after 1.526824443s: waiting for machine to come up
	I0318 14:20:43.246389 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:43.246960 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:43.246997 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:43.246901 1130210 retry.go:31] will retry after 2.608273558s: waiting for machine to come up
	I0318 14:20:45.858375 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:45.858919 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:45.858943 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:45.858871 1130210 retry.go:31] will retry after 2.272908905s: waiting for machine to come up
	I0318 14:20:48.134345 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:48.134774 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:48.134826 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:48.134739 1130210 retry.go:31] will retry after 3.671073699s: waiting for machine to come up
	I0318 14:20:53.273198 1128964 start.go:364] duration metric: took 4m11.791347901s to acquireMachinesLock for "default-k8s-diff-port-075922"
	I0318 14:20:53.273284 1128964 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:53.273295 1128964 fix.go:54] fixHost starting: 
	I0318 14:20:53.273834 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:53.273879 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:53.291440 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0318 14:20:53.291988 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:53.292571 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:20:53.292605 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:53.292931 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:53.293125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:20:53.293278 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:20:53.294856 1128964 fix.go:112] recreateIfNeeded on default-k8s-diff-port-075922: state=Stopped err=<nil>
	I0318 14:20:53.294889 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	W0318 14:20:53.295063 1128964 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:53.297784 1128964 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-075922" ...
	I0318 14:20:51.809859 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.810477 1128788 main.go:141] libmachine: (embed-certs-767719) Found IP for machine: 192.168.72.45
	I0318 14:20:51.810503 1128788 main.go:141] libmachine: (embed-certs-767719) Reserving static IP address...
	I0318 14:20:51.810518 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has current primary IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.811061 1128788 main.go:141] libmachine: (embed-certs-767719) Reserved static IP address: 192.168.72.45
	I0318 14:20:51.811104 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.811112 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting for SSH to be available...
	I0318 14:20:51.811137 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | skip adding static IP to network mk-embed-certs-767719 - found existing host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"}
	I0318 14:20:51.811163 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Getting to WaitForSSH function...
	I0318 14:20:51.813739 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814076 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.814121 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH client type: external
	I0318 14:20:51.814225 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa (-rw-------)
	I0318 14:20:51.814282 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:20:51.814327 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | About to run SSH command:
	I0318 14:20:51.814346 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | exit 0
	I0318 14:20:51.944192 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | SSH cmd err, output: <nil>: 
	I0318 14:20:51.944624 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetConfigRaw
	I0318 14:20:51.945477 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:51.948244 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.948667 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.948711 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.949069 1128788 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/config.json ...
	I0318 14:20:51.949305 1128788 machine.go:94] provisionDockerMachine start ...
	I0318 14:20:51.949327 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:51.949596 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:51.952267 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952653 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.952703 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952836 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:51.953047 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953200 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953376 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:51.953525 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:51.953772 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:51.953785 1128788 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:20:52.068806 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:20:52.068847 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069162 1128788 buildroot.go:166] provisioning hostname "embed-certs-767719"
	I0318 14:20:52.069198 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069500 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.072258 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072750 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.072785 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072939 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.073146 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073312 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073492 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.073730 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.073916 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.073934 1128788 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-767719 && echo "embed-certs-767719" | sudo tee /etc/hostname
	I0318 14:20:52.204197 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-767719
	
	I0318 14:20:52.204258 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.207520 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.207927 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.207959 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.208178 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.208478 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208740 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208961 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.209164 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.209352 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.209370 1128788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-767719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-767719/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-767719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:20:52.337185 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:52.337220 1128788 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:20:52.337243 1128788 buildroot.go:174] setting up certificates
	I0318 14:20:52.337253 1128788 provision.go:84] configureAuth start
	I0318 14:20:52.337264 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.337561 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:52.340693 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341061 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.341098 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341280 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.343239 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343570 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.343595 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343709 1128788 provision.go:143] copyHostCerts
	I0318 14:20:52.343782 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:20:52.343794 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:20:52.343888 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:20:52.344001 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:20:52.344010 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:20:52.344038 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:20:52.344095 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:20:52.344103 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:20:52.344126 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:20:52.344220 1128788 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.embed-certs-767719 san=[127.0.0.1 192.168.72.45 embed-certs-767719 localhost minikube]
	I0318 14:20:52.550241 1128788 provision.go:177] copyRemoteCerts
	I0318 14:20:52.550380 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:20:52.550433 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.553182 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553591 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.553626 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553824 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.554056 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.554241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.554392 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:52.645341 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:20:52.672476 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:20:52.698609 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:20:52.724434 1128788 provision.go:87] duration metric: took 387.165868ms to configureAuth
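For context, the "generating server cert" step logged above issues a TLS server certificate signed by the minikube CA, carrying the SAN list shown in the log (127.0.0.1, 192.168.72.45, embed-certs-767719, localhost, minikube). Below is a minimal Go sketch of that kind of issuance; it is illustrative only, not minikube's provision code, and it assumes the CA key is an RSA key in PKCS#1 PEM form with error handling trimmed.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the CA certificate and key (file names echo the log; error handling trimmed).
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA CA key

		// Fresh key pair for the server certificate.
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

		// SANs and org copied from the log line above; everything else is illustrative.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject: pkix.Name{
				CommonName:   "embed-certs-767719",
				Organization: []string{"jenkins.embed-certs-767719"},
			},
			DNSNames:    []string{"embed-certs-767719", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.45")},
			NotBefore:   time.Now().Add(-time.Hour),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}

		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		_ = os.WriteFile("server.pem",
			pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o600)
		_ = os.WriteFile("server-key.pem",
			pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
	}

The resulting server.pem/server-key.pem pair is what the next log lines copy to /etc/docker on the guest.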
	I0318 14:20:52.724471 1128788 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:20:52.724727 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:20:52.724827 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.727323 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727700 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.727764 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727882 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.728098 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728443 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.728626 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.728859 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.728878 1128788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:20:53.012918 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:20:53.012959 1128788 machine.go:97] duration metric: took 1.063639009s to provisionDockerMachine
	I0318 14:20:53.012976 1128788 start.go:293] postStartSetup for "embed-certs-767719" (driver="kvm2")
	I0318 14:20:53.012990 1128788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:20:53.013039 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.013471 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:20:53.013505 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.016524 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.016929 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.016961 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.017153 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.017372 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.017582 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.017846 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.107977 1128788 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:20:53.113146 1128788 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:20:53.113184 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:20:53.113302 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:20:53.113423 1128788 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:20:53.113558 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:20:53.125166 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:53.152094 1128788 start.go:296] duration metric: took 139.099686ms for postStartSetup
	I0318 14:20:53.152147 1128788 fix.go:56] duration metric: took 19.807001958s for fixHost
	I0318 14:20:53.152194 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.155058 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155371 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.155401 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155643 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.155908 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156138 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156307 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.156536 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:53.156770 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:53.156786 1128788 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:20:53.272998 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771653.240528844
	
	I0318 14:20:53.273029 1128788 fix.go:216] guest clock: 1710771653.240528844
	I0318 14:20:53.273046 1128788 fix.go:229] Guest: 2024-03-18 14:20:53.240528844 +0000 UTC Remote: 2024-03-18 14:20:53.15215228 +0000 UTC m=+272.563569050 (delta=88.376564ms)
	I0318 14:20:53.273075 1128788 fix.go:200] guest clock delta is within tolerance: 88.376564ms
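The "guest clock" lines above compare the VM's `date +%s.%N` output against the host clock and accept the machine when the skew stays inside a tolerance. A rough sketch of that comparison follows; the one-second tolerance and function name are chosen here purely for illustration.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
	// the guest clock is ahead of (positive) or behind (negative) the host clock.
	func guestClockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad or truncate the fractional part to exactly nanosecond precision.
			frac := (parts[1] + "000000000")[:9]
			nsec, _ = strconv.ParseInt(frac, 10, 64)
		}
		guest := time.Unix(sec, nsec)
		return guest.Sub(hostNow), nil
	}

	func main() {
		// Timestamp reused from the log output above.
		delta, _ := guestClockDelta("1710771653.240528844", time.Now())
		const tolerance = time.Second // illustrative threshold
		if delta < -tolerance || delta > tolerance {
			fmt.Printf("guest clock delta %s exceeds tolerance, would resync the guest clock\n", delta)
		} else {
			fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
		}
	}

When the delta falls outside the tolerance, the fix step would resynchronize the guest clock before continuing; here it was within 88ms, so the log simply releases the machines lock.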
	I0318 14:20:53.273083 1128788 start.go:83] releasing machines lock for "embed-certs-767719", held for 19.927965733s
	I0318 14:20:53.273118 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.273431 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:53.276309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276740 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.276768 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276958 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277493 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277716 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277806 1128788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:20:53.277851 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.277976 1128788 ssh_runner.go:195] Run: cat /version.json
	I0318 14:20:53.278002 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.280799 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.280853 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281234 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281263 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281289 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281518 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281616 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281767 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281850 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281945 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282028 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282090 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.282179 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.386584 1128788 ssh_runner.go:195] Run: systemctl --version
	I0318 14:20:53.393371 1128788 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:20:53.547565 1128788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:20:53.554182 1128788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:20:53.554266 1128788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:20:53.573031 1128788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:20:53.573071 1128788 start.go:494] detecting cgroup driver to use...
	I0318 14:20:53.573197 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:20:53.591649 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:20:53.607279 1128788 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:20:53.607359 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:20:53.624327 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:20:53.640398 1128788 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:20:53.759979 1128788 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:20:53.931294 1128788 docker.go:233] disabling docker service ...
	I0318 14:20:53.931381 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:20:53.954433 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:20:53.969396 1128788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:20:54.107898 1128788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:20:54.241874 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:20:54.257748 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:20:54.278981 1128788 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:20:54.279057 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.293329 1128788 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:20:54.293390 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.304838 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.316646 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.328623 1128788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:20:54.340540 1128788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:20:54.352368 1128788 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:20:54.352433 1128788 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:20:54.368965 1128788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:20:54.389268 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:54.511182 1128788 ssh_runner.go:195] Run: sudo systemctl restart crio
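The runtime reconfiguration above is performed with `sed -i` over /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup), followed by a daemon-reload and a CRI-O restart. The same kind of whole-line key rewrite can be sketched in Go as below; the helper name and the standalone program are illustrative, not minikube code.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfKey replaces an entire `key = value` line in a CRI-O drop-in file,
	// mirroring the log's `sed -i 's|^.*pause_image = .*$|pause_image = "..."|'` edits.
	func setConfKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
		_ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
		_ = setConfKey(conf, "cgroup_manager", "cgroupfs")
		// After editing, the log reloads systemd and restarts crio:
		//   sudo systemctl daemon-reload && sudo systemctl restart crio
	}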
	I0318 14:20:54.657685 1128788 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:20:54.657798 1128788 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:20:54.663591 1128788 start.go:562] Will wait 60s for crictl version
	I0318 14:20:54.663670 1128788 ssh_runner.go:195] Run: which crictl
	I0318 14:20:54.667903 1128788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:20:54.707961 1128788 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:20:54.708065 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.738240 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.773562 1128788 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:20:54.775286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:54.778784 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779228 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:54.779265 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779498 1128788 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 14:20:54.784575 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:54.799207 1128788 kubeadm.go:877] updating cluster {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:20:54.799380 1128788 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:20:54.799440 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:54.839309 1128788 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:20:54.839387 1128788 ssh_runner.go:195] Run: which lz4
	I0318 14:20:54.844323 1128788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:20:54.850487 1128788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:20:54.850524 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 14:20:53.299380 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Start
	I0318 14:20:53.299595 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring networks are active...
	I0318 14:20:53.300497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network default is active
	I0318 14:20:53.300887 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network mk-default-k8s-diff-port-075922 is active
	I0318 14:20:53.301316 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Getting domain xml...
	I0318 14:20:53.302079 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Creating domain...
	I0318 14:20:54.607619 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting to get IP...
	I0318 14:20:54.608510 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609075 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.609050 1130331 retry.go:31] will retry after 282.377323ms: waiting for machine to come up
	I0318 14:20:54.892766 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893323 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.893259 1130331 retry.go:31] will retry after 264.840581ms: waiting for machine to come up
	I0318 14:20:55.160018 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160536 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160578 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.160460 1130331 retry.go:31] will retry after 402.458985ms: waiting for machine to come up
	I0318 14:20:55.564282 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564773 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.564727 1130331 retry.go:31] will retry after 382.70672ms: waiting for machine to come up
	I0318 14:20:55.949676 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950183 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950218 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.950122 1130331 retry.go:31] will retry after 676.466466ms: waiting for machine to come up
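Both profiles above sit in a retry.go wait loop until libvirt hands the freshly created domain a DHCP lease, with the delay growing on each attempt. A compact sketch of that pattern is below; the lookupIP helper is hypothetical and only stands in for the DHCP-lease query, and the backoff/jitter constants are illustrative.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for the libvirt DHCP-lease query; it is hypothetical here.
	func lookupIP(mac string) (string, error) { return "", errors.New("no lease yet") }

	// waitForIP polls for the machine's IP, sleeping a growing, jittered interval
	// between attempts, in the spirit of the "will retry after ..." messages above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d: will retry after %s: waiting for machine to come up\n", attempt, sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
	}

	func main() {
		// MAC address reused from the log above; the short timeout is for demonstration.
		if ip, err := waitForIP("52:54:00:c5:53:d5", 5*time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("machine came up at", ip)
		}
	}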
	I0318 14:20:56.798325 1128788 crio.go:444] duration metric: took 1.954051074s to copy over tarball
	I0318 14:20:56.798418 1128788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:20:59.431722 1128788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.633260911s)
	I0318 14:20:59.431777 1128788 crio.go:451] duration metric: took 2.633417573s to extract the tarball
	I0318 14:20:59.431788 1128788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:20:59.476265 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:59.534130 1128788 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:20:59.534161 1128788 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:20:59.534173 1128788 kubeadm.go:928] updating node { 192.168.72.45 8443 v1.28.4 crio true true} ...
	I0318 14:20:59.534357 1128788 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-767719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:20:59.534499 1128788 ssh_runner.go:195] Run: crio config
	I0318 14:20:59.594778 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:20:59.594814 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:20:59.594831 1128788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:20:59.594894 1128788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.45 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-767719 NodeName:embed-certs-767719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:20:59.595092 1128788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-767719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:20:59.595203 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:20:59.610298 1128788 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:20:59.610388 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:20:59.624050 1128788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 14:20:59.644283 1128788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:20:59.663987 1128788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0318 14:20:59.685379 1128788 ssh_runner.go:195] Run: grep 192.168.72.45	control-plane.minikube.internal$ /etc/hosts
	I0318 14:20:59.690360 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:59.705657 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:59.839158 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:20:59.857617 1128788 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719 for IP: 192.168.72.45
	I0318 14:20:59.857642 1128788 certs.go:194] generating shared ca certs ...
	I0318 14:20:59.857674 1128788 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:20:59.857839 1128788 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:20:59.857882 1128788 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:20:59.857893 1128788 certs.go:256] generating profile certs ...
	I0318 14:20:59.858006 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/client.key
	I0318 14:20:59.858061 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key.f59f641c
	I0318 14:20:59.858098 1128788 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key
	I0318 14:20:59.858268 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:20:59.858301 1128788 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:20:59.858308 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:20:59.858331 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:20:59.858360 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:20:59.858382 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:20:59.858424 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:59.859110 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:20:59.901101 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:20:59.947010 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:20:59.990882 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:00.032358 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 14:21:00.070194 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:00.108670 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:00.137760 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:00.168481 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:00.199292 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:00.228315 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:00.257409 1128788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:00.277720 1128788 ssh_runner.go:195] Run: openssl version
	I0318 14:21:00.284138 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:00.296443 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302083 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302160 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.308748 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:00.322025 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:00.334654 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340319 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340404 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.347454 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:00.359627 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:00.371865 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377236 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377335 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.387041 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:00.404525 1128788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:00.412919 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:00.422577 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:00.434217 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:00.444535 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:00.452863 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:00.459979 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
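Each `openssl x509 -checkend 86400` run above asks whether the certificate will expire within the next 24 hours (86400 seconds); a failing check is what would force a regeneration before the cluster is restarted. An equivalent check sketched in Go, with one path reused from the log and the helper name invented for illustration:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// matching the semantics of `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println("expires within 24h:", soon, "err:", err)
	}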
	I0318 14:21:00.467503 1128788 kubeadm.go:391] StartCluster: {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:00.467680 1128788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:00.467780 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.507833 1128788 cri.go:89] found id: ""
	I0318 14:21:00.507926 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:00.519958 1128788 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:00.519982 1128788 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:00.520011 1128788 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:00.520066 1128788 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:00.532229 1128788 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:00.533479 1128788 kubeconfig.go:125] found "embed-certs-767719" server: "https://192.168.72.45:8443"
	I0318 14:21:00.536185 1128788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:00.548434 1128788 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.45
	I0318 14:21:00.548484 1128788 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:00.548498 1128788 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:00.548551 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.592096 1128788 cri.go:89] found id: ""
	I0318 14:21:00.592168 1128788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:00.610826 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:00.622294 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:00.622330 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:00.622386 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:00.633009 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:00.633089 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:20:56.628134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628708 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628747 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:56.628643 1130331 retry.go:31] will retry after 703.45784ms: waiting for machine to come up
	I0318 14:20:57.334203 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334666 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334702 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:57.334600 1130331 retry.go:31] will retry after 1.177266521s: waiting for machine to come up
	I0318 14:20:58.513803 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514452 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514485 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:58.514389 1130331 retry.go:31] will retry after 1.389627955s: waiting for machine to come up
	I0318 14:20:59.906109 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906663 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906750 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:59.906632 1130331 retry.go:31] will retry after 1.239662517s: waiting for machine to come up
	I0318 14:21:01.147929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148325 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:01.148248 1130331 retry.go:31] will retry after 2.183067358s: waiting for machine to come up
	I0318 14:21:00.644684 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:00.921213 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:00.921307 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:00.932412 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.943408 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:00.943481 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.955574 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:00.966416 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:00.966483 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:00.978014 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:00.993622 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:01.128726 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.331974 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.203164646s)
	I0318 14:21:02.332035 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.574592 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.686011 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.821189 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:02.821373 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.322200 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.822207 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.838586 1128788 api_server.go:72] duration metric: took 1.017395673s to wait for apiserver process to appear ...
	I0318 14:21:03.838622 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:03.838660 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:03.839282 1128788 api_server.go:269] stopped: https://192.168.72.45:8443/healthz: Get "https://192.168.72.45:8443/healthz": dial tcp 192.168.72.45:8443: connect: connection refused
	I0318 14:21:04.339675 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
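From here the log polls https://192.168.72.45:8443/healthz until the apiserver answers 200 OK, treating the connection refusal above and the 403/500 responses that follow as "not ready yet". A simplified polling loop of that shape is sketched below; the anonymous TLS probe (InsecureSkipVerify) and the fixed poll interval are assumptions for the sketch, not how minikube authenticates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz keeps probing the apiserver's /healthz endpoint until it returns
	// 200 OK, logging non-200 bodies the way the report does above.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // unauthenticated probe (assumption)
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("status: %s returned error %d:\n%s\n", url, resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(pollHealthz("https://192.168.72.45:8443/healthz", time.Minute))
	}

During a restart the apiserver typically moves through exactly the sequence recorded below: connection refused while the static pod starts, 403 while anonymous access is still forbidden, then 500 while post-start hooks finish, and finally 200.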
	I0318 14:21:03.333080 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333620 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333648 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:03.333583 1130331 retry.go:31] will retry after 2.259124316s: waiting for machine to come up
	I0318 14:21:05.594356 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594823 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:05.594754 1130331 retry.go:31] will retry after 2.492274875s: waiting for machine to come up
	I0318 14:21:07.054330 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:07.054373 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:07.054392 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.073841 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.073894 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.339285 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.345307 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.345340 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.838915 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.846722 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.846759 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:08.339409 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:08.344790 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:21:08.358050 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:08.358097 1128788 api_server.go:131] duration metric: took 4.519466088s to wait for apiserver health ...
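Note: the progression above (connection refused, then 403 for the anonymous user, then 500 listing unfinished post-start hooks, then 200) is the normal startup sequence for a restarted kube-apiserver; minikube simply polls /healthz roughly every 500ms until it answers 200. A minimal Go sketch of that poll, assuming a cluster-local CA so TLS verification is skipped (an illustration, not minikube's actual api_server.go):

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// 200 OK or the timeout expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.45:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}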
	I0318 14:21:08.358121 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:21:08.358130 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:08.359982 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:21:08.361428 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:08.378195 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:21:08.409269 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:08.421874 1128788 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:08.421960 1128788 system_pods.go:61] "coredns-5dd5756b68-4dmw2" [324897fc-dd26-47f1-b8bc-4d2ed721a576] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:08.421971 1128788 system_pods.go:61] "etcd-embed-certs-767719" [df147cb8-989c-408d-ade8-547858a8c2bb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:08.421982 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [82f7d170-3b3c-448c-b824-6d263c5c1128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:08.421989 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [cd4dd4f3-a727-4864-b0e9-a89758537de9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:08.422002 1128788 system_pods.go:61] "kube-proxy-mtx9w" [b46b48ff-e4c0-4595-82c4-7c0c86103262] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:08.422010 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [63774f42-c85e-467f-9bd3-0c78d44b2681] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:08.422022 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-jr9wp" [e40748e2-ebc3-4c4f-a9cc-01bbc7416f35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:08.422030 1128788 system_pods.go:61] "storage-provisioner" [1b51e6a7-2693-4d0b-b47e-ccbcb1e46424] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:08.422047 1128788 system_pods.go:74] duration metric: took 12.746875ms to wait for pod list to return data ...
	I0318 14:21:08.422058 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:08.432361 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:08.432461 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:08.432483 1128788 node_conditions.go:105] duration metric: took 10.415127ms to run NodePressure ...
	I0318 14:21:08.432524 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:08.730544 1128788 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:08.735970 1128788 kubeadm.go:733] kubelet initialised
	I0318 14:21:08.736001 1128788 kubeadm.go:734] duration metric: took 5.422027ms waiting for restarted kubelet to initialise ...
	I0318 14:21:08.736042 1128788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:08.745586 1128788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
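The pod_ready.go wait above boils down to reading each system-critical pod and checking its Ready condition. A short client-go sketch of that check (illustrative only; the pod name is taken from this run, while the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True.
func podReady(kubeconfigPath, namespace, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// /path/to/kubeconfig is a placeholder for the test profile's kubeconfig.
	ok, err := podReady("/path/to/kubeconfig", "kube-system", "coredns-5dd5756b68-4dmw2")
	fmt.Println(ok, err)
}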
	I0318 14:21:08.090446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090834 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:08.090779 1130331 retry.go:31] will retry after 3.31085892s: waiting for machine to come up
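The retry.go lines above show the usual wait-for-IP loop: the kvm2 driver asks libvirt for the domain's DHCP lease and, while none exists, sleeps for a growing, jittered interval before asking again. A rough Go sketch of that pattern (lookupIP is a hypothetical stand-in for the real lease query in the driver):

package main

import (
	"fmt"
	"time"
)

// lookupIP is a placeholder for querying the libvirt network's DHCP leases
// for the domain's MAC address; it returns the IP once a lease appears.
func lookupIP(domain string) (string, bool) {
	return "", false
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(domain); ok {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		backoff *= 2 // the real intervals are jittered rather than strictly doubled
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	ip, err := waitForIP("default-k8s-diff-port-075922", 4*time.Minute)
	fmt.Println(ip, err)
}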
	I0318 14:21:12.749494 1129259 start.go:364] duration metric: took 3m51.481737314s to acquireMachinesLock for "old-k8s-version-782728"
	I0318 14:21:12.749582 1129259 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:12.749596 1129259 fix.go:54] fixHost starting: 
	I0318 14:21:12.750059 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:12.750110 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:12.772262 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0318 14:21:12.772787 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:12.773383 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:21:12.773408 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:12.773864 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:12.774101 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:12.774261 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetState
	I0318 14:21:12.776193 1129259 fix.go:112] recreateIfNeeded on old-k8s-version-782728: state=Stopped err=<nil>
	I0318 14:21:12.776227 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	W0318 14:21:12.776377 1129259 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:12.778538 1129259 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-782728" ...
	I0318 14:21:11.405935 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has current primary IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406539 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Found IP for machine: 192.168.83.39
	I0318 14:21:11.406553 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserving static IP address...
	I0318 14:21:11.407015 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.407048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | skip adding static IP to network mk-default-k8s-diff-port-075922 - found existing host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"}
	I0318 14:21:11.407066 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserved static IP address: 192.168.83.39
	I0318 14:21:11.407081 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for SSH to be available...
	I0318 14:21:11.407093 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Getting to WaitForSSH function...
	I0318 14:21:11.409327 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409674 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.409706 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409895 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH client type: external
	I0318 14:21:11.409919 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa (-rw-------)
	I0318 14:21:11.410034 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:11.410065 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | About to run SSH command:
	I0318 14:21:11.410089 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | exit 0
	I0318 14:21:11.544258 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:11.544698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetConfigRaw
	I0318 14:21:11.545370 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.548333 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.548729 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.548764 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.549053 1128964 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/config.json ...
	I0318 14:21:11.549275 1128964 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:11.549295 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:11.549533 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.551799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552156 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.552186 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552280 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.552482 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552657 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552797 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.552994 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.553261 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.553278 1128964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:11.665093 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:11.665132 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665456 1128964 buildroot.go:166] provisioning hostname "default-k8s-diff-port-075922"
	I0318 14:21:11.665493 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665730 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.668911 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.669413 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669679 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.669923 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670319 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.670530 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.670718 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.670734 1128964 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-075922 && echo "default-k8s-diff-port-075922" | sudo tee /etc/hostname
	I0318 14:21:11.807520 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-075922
	
	I0318 14:21:11.807552 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.810614 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811011 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.811047 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811257 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.811480 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811941 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.812155 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.812361 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.812387 1128964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-075922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-075922/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-075922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:11.942984 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:11.943022 1128964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:11.943078 1128964 buildroot.go:174] setting up certificates
	I0318 14:21:11.943094 1128964 provision.go:84] configureAuth start
	I0318 14:21:11.943108 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.943441 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.946659 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947091 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.947125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947328 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.949852 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950275 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.950310 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950496 1128964 provision.go:143] copyHostCerts
	I0318 14:21:11.950579 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:11.950596 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:11.950679 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:11.950859 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:11.950873 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:11.950898 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:11.950964 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:11.950971 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:11.950988 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:11.951041 1128964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-075922 san=[127.0.0.1 192.168.83.39 default-k8s-diff-port-075922 localhost minikube]
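The server cert generated here covers every name and IP the machine may be reached by (the san=[...] list in the line above). As an illustration of what that entails, and not minikube's actual provision code, the following sketch creates a certificate with the same SANs using the Go standard library (self-signed for brevity; minikube signs it with the ca.pem/ca-key.pem pair listed above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-075922"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Subject alternative names from the log line above.
		DNSNames:    []string{"default-k8s-diff-port-075922", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.39")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}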
	I0318 14:21:12.019678 1128964 provision.go:177] copyRemoteCerts
	I0318 14:21:12.019756 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:12.019788 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.023122 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023603 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.023639 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023862 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.024077 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.024294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.024445 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.112914 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:12.142575 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 14:21:12.171747 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:12.200144 1128964 provision.go:87] duration metric: took 257.034667ms to configureAuth
	I0318 14:21:12.200177 1128964 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:12.200401 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:21:12.200515 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.203573 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.203978 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.204019 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.204160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.204379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204658 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.205131 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.205335 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.205367 1128964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:12.494965 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:12.494997 1128964 machine.go:97] duration metric: took 945.707691ms to provisionDockerMachine
	I0318 14:21:12.495012 1128964 start.go:293] postStartSetup for "default-k8s-diff-port-075922" (driver="kvm2")
	I0318 14:21:12.495026 1128964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:12.495048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.495450 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:12.495486 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.498444 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.498821 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498928 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.499166 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.499363 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.499560 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.588350 1128964 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:12.593611 1128964 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:12.593638 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:12.593714 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:12.593788 1128964 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:12.593875 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:12.605751 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:12.633577 1128964 start.go:296] duration metric: took 138.54984ms for postStartSetup
	I0318 14:21:12.633621 1128964 fix.go:56] duration metric: took 19.360327899s for fixHost
	I0318 14:21:12.633645 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.636446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636822 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.636850 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636989 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.637237 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637428 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637596 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.637786 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.637988 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.638002 1128964 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 14:21:12.749326 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771672.727120819
	
	I0318 14:21:12.749355 1128964 fix.go:216] guest clock: 1710771672.727120819
	I0318 14:21:12.749364 1128964 fix.go:229] Guest: 2024-03-18 14:21:12.727120819 +0000 UTC Remote: 2024-03-18 14:21:12.633625447 +0000 UTC m=+271.308784721 (delta=93.495372ms)
	I0318 14:21:12.749386 1128964 fix.go:200] guest clock delta is within tolerance: 93.495372ms
	I0318 14:21:12.749392 1128964 start.go:83] releasing machines lock for "default-k8s-diff-port-075922", held for 19.476136638s
	I0318 14:21:12.749418 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.749732 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:12.752996 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753471 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.753506 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753815 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754448 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754651 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754744 1128964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:12.754791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.754943 1128964 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:12.754970 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.758153 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758303 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758628 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758660 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758694 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758758 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758927 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758988 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759057 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759157 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759251 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759292 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.759371 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.841423 1128964 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:12.868154 1128964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:13.020652 1128964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:13.028168 1128964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:13.028267 1128964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:13.047225 1128964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:13.047264 1128964 start.go:494] detecting cgroup driver to use...
	I0318 14:21:13.047361 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:13.064518 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:13.080271 1128964 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:13.080356 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:13.095583 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:13.110387 1128964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:13.250934 1128964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:13.450657 1128964 docker.go:233] disabling docker service ...
	I0318 14:21:13.450738 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:13.471701 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:13.488157 1128964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:13.644961 1128964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:13.811333 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:13.828584 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:13.852476 1128964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:13.852557 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.864849 1128964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:13.864951 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.877723 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.890337 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
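After the three sed edits above, the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf contains at least the following settings; everything else shipped in the base image is left untouched:

pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"

Pinning the pause image keeps kubeadm and CRI-O agreeing on the sandbox image, and cgroupfs (with conmon placed in the "pod" cgroup) matches the cgroup driver minikube configures for the kubelet on this buildroot guest.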
	I0318 14:21:13.902558 1128964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:13.915858 1128964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:13.928426 1128964 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:13.928526 1128964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:13.951761 1128964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:13.964785 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:14.144432 1128964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:14.311928 1128964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:14.312078 1128964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:14.319279 1128964 start.go:562] Will wait 60s for crictl version
	I0318 14:21:14.319347 1128964 ssh_runner.go:195] Run: which crictl
	I0318 14:21:14.325325 1128964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:14.385244 1128964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:14.385344 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.426242 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.460725 1128964 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:21:10.753176 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:12.756558 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:13.760252 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:13.760295 1128788 pod_ready.go:81] duration metric: took 5.014671723s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:13.760315 1128788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:12.780014 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .Start
	I0318 14:21:12.780429 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring networks are active...
	I0318 14:21:12.781303 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network default is active
	I0318 14:21:12.781644 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network mk-old-k8s-version-782728 is active
	I0318 14:21:12.782077 1129259 main.go:141] libmachine: (old-k8s-version-782728) Getting domain xml...
	I0318 14:21:12.782826 1129259 main.go:141] libmachine: (old-k8s-version-782728) Creating domain...
	I0318 14:21:14.142992 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting to get IP...
	I0318 14:21:14.144199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.144824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.144851 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.144681 1130456 retry.go:31] will retry after 192.354686ms: waiting for machine to come up
	I0318 14:21:14.339303 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.339861 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.339886 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.339806 1130456 retry.go:31] will retry after 389.480557ms: waiting for machine to come up
	I0318 14:21:14.731567 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.732127 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.732163 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.732075 1130456 retry.go:31] will retry after 435.139168ms: waiting for machine to come up
	I0318 14:21:15.168657 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.169170 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.169209 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.169147 1130456 retry.go:31] will retry after 398.075576ms: waiting for machine to come up
	I0318 14:21:15.569132 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.569651 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.569699 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.569627 1130456 retry.go:31] will retry after 716.720722ms: waiting for machine to come up
	I0318 14:21:14.461974 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:14.465116 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465652 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:14.465696 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465903 1128964 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:14.471039 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:14.486098 1128964 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:14.486307 1128964 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:21:14.486379 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:14.526373 1128964 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:21:14.526476 1128964 ssh_runner.go:195] Run: which lz4
	I0318 14:21:14.531145 1128964 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:21:14.536370 1128964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:14.536412 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 14:21:15.769517 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:17.772721 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:18.769552 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:18.769590 1128788 pod_ready.go:81] duration metric: took 5.009265127s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:18.769610 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:16.287569 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:16.288171 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:16.288208 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:16.288111 1130456 retry.go:31] will retry after 837.119291ms: waiting for machine to come up
	I0318 14:21:17.127197 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.127610 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.127641 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.127572 1130456 retry.go:31] will retry after 786.468871ms: waiting for machine to come up
	I0318 14:21:17.916280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.916885 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.916920 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.916827 1130456 retry.go:31] will retry after 1.219601482s: waiting for machine to come up
	I0318 14:21:19.137624 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:19.138092 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:19.138124 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:19.138038 1130456 retry.go:31] will retry after 1.236592895s: waiting for machine to come up
	I0318 14:21:20.376069 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:20.376549 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:20.376574 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:20.376518 1130456 retry.go:31] will retry after 2.101851485s: waiting for machine to come up
	I0318 14:21:16.505094 1128964 crio.go:444] duration metric: took 1.973996063s to copy over tarball
	I0318 14:21:16.505250 1128964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:19.251009 1128964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.745717226s)
	I0318 14:21:19.251045 1128964 crio.go:451] duration metric: took 2.745895394s to extract the tarball
	I0318 14:21:19.251053 1128964 ssh_runner.go:146] rm: /preloaded.tar.lz4
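The sequence above is the preload path: when crictl reports no cached images, the preload tarball is copied over SSH, extracted into /var with xattrs preserved, and then removed. A rough sketch of the same check-then-extract flow using os/exec is below; the helper names and marker image are illustrative, not minikube's real API:

package preloadsketch

import (
	"fmt"
	"os/exec"
	"strings"
)

const tarball = "/preloaded.tar.lz4"

// hasPreloadedImages asks crictl for the image list and looks for a marker image
// such as "registry.k8s.io/kube-apiserver", as the log does before deciding to copy.
func hasPreloadedImages(marker string) bool {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	return err == nil && strings.Contains(string(out), marker)
}

// extractPreload unpacks the lz4 tarball into /var, preserving security xattrs,
// matching the tar invocation shown in the log above.
func extractPreload() error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}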
	I0318 14:21:19.308392 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:19.363143 1128964 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:21:19.363172 1128964 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:21:19.363181 1128964 kubeadm.go:928] updating node { 192.168.83.39 8444 v1.28.4 crio true true} ...
	I0318 14:21:19.363313 1128964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-075922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:21:19.363415 1128964 ssh_runner.go:195] Run: crio config
	I0318 14:21:19.415995 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:19.416028 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:19.416048 1128964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:19.416085 1128964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-075922 NodeName:default-k8s-diff-port-075922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:21:19.416297 1128964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-075922"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:19.416379 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:21:19.427340 1128964 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:19.427420 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:19.438470 1128964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0318 14:21:19.459945 1128964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:19.479728 1128964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0318 14:21:19.500079 1128964 ssh_runner.go:195] Run: grep 192.168.83.39	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:19.504746 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:19.519931 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:19.654822 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:19.675414 1128964 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922 for IP: 192.168.83.39
	I0318 14:21:19.675443 1128964 certs.go:194] generating shared ca certs ...
	I0318 14:21:19.675462 1128964 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:19.675647 1128964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:19.675707 1128964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:19.675722 1128964 certs.go:256] generating profile certs ...
	I0318 14:21:19.675861 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/client.key
	I0318 14:21:19.683399 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key.675162fd
	I0318 14:21:19.683522 1128964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key
	I0318 14:21:19.683667 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:19.683715 1128964 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:19.683730 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:19.683782 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:19.683870 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:19.683897 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:19.683940 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:19.684679 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:19.743065 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:19.787963 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:19.833491 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:19.865359 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 14:21:19.903294 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:19.932298 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:19.961860 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 14:21:19.992150 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:20.020750 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:20.047780 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:20.074566 1128964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:20.094524 1128964 ssh_runner.go:195] Run: openssl version
	I0318 14:21:20.101181 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:20.118970 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124628 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124707 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.133462 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:20.150447 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:20.165864 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173488 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173627 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.183147 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:20.200417 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:20.213973 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219407 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219488 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.226491 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:20.240299 1128964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:20.245960 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:20.253073 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:20.260144 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:20.267546 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:20.274740 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:20.282502 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
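Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. An equivalent check in Go with crypto/x509 is sketched here; it assumes PEM-encoded files and is not the code minikube actually runs:

package certsketch

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires in less than d,
// the same condition "openssl x509 -checkend <seconds>" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}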
	I0318 14:21:20.289722 1128964 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:20.289817 1128964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:20.289877 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.338941 1128964 cri.go:89] found id: ""
	I0318 14:21:20.339036 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:20.350677 1128964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:20.350706 1128964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:20.350718 1128964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:20.350775 1128964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:20.362216 1128964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:20.363622 1128964 kubeconfig.go:125] found "default-k8s-diff-port-075922" server: "https://192.168.83.39:8444"
	I0318 14:21:20.366606 1128964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:20.379417 1128964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.39
	I0318 14:21:20.379460 1128964 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:20.379481 1128964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:20.379556 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.423139 1128964 cri.go:89] found id: ""
	I0318 14:21:20.423224 1128964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:20.444111 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:20.456698 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:20.456725 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:20.456787 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:21:20.467432 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:20.467502 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:20.478894 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:21:20.490123 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:20.490216 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:20.501744 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.514020 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:20.514084 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.526805 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:21:20.538374 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:20.538452 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
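The block above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that is missing it, so the subsequent "kubeadm init phase kubeconfig" can regenerate them. A condensed sketch of that cleanup loop; the file list and endpoint come from the log, but the helper itself is illustrative:

package kubeconfigsketch

import (
	"bytes"
	"os"
)

// removeStaleConfigs deletes kubeconfig files that do not mention the expected
// control-plane endpoint (e.g. "https://control-plane.minikube.internal:8444").
func removeStaleConfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing or stale: remove so kubeadm recreates it from the new config.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				return rmErr
			}
		}
	}
	return nil
}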
	I0318 14:21:20.550880 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:20.562302 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:20.687288 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.085960 1128788 pod_ready.go:102] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:21.781260 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.781287 1128788 pod_ready.go:81] duration metric: took 3.011668835s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.781297 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789501 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.789537 1128788 pod_ready.go:81] duration metric: took 8.231402ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789552 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797445 1128788 pod_ready.go:92] pod "kube-proxy-mtx9w" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.797483 1128788 pod_ready.go:81] duration metric: took 7.921289ms for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797496 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804084 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.804120 1128788 pod_ready.go:81] duration metric: took 6.613559ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804132 1128788 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:23.812751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:22.480055 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:22.480767 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:22.480805 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:22.480700 1130456 retry.go:31] will retry after 2.377253243s: waiting for machine to come up
	I0318 14:21:24.861000 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:24.861459 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:24.861513 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:24.861440 1130456 retry.go:31] will retry after 2.768860765s: waiting for machine to come up
	I0318 14:21:21.432193 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.821781 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.899411 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.984494 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:21.984624 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.484985 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.985119 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:23.009700 1128964 api_server.go:72] duration metric: took 1.025195346s to wait for apiserver process to appear ...
	I0318 14:21:23.009739 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:23.009764 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:23.010328 1128964 api_server.go:269] stopped: https://192.168.83.39:8444/healthz: Get "https://192.168.83.39:8444/healthz": dial tcp 192.168.83.39:8444: connect: connection refused
	I0318 14:21:23.510606 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.307173 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.307217 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.307238 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.345507 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.345551 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.510350 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.515684 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:26.515721 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.010509 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.015492 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:27.015526 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.510772 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.520209 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:21:27.527945 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:27.527978 1128964 api_server.go:131] duration metric: took 4.518232257s to wait for apiserver health ...
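The health wait above simply polls /healthz every 500ms, treating connection refused, 403 and 500 responses as "not ready yet" until a plain 200 "ok" comes back. A minimal polling sketch follows; TLS verification is skipped here purely for brevity, whereas the real client authenticates with the cluster's certificates:

package healthsketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			// 403/500 while bootstrap hooks finish: keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}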
	I0318 14:21:27.527988 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:27.527994 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:27.529779 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:21:26.313296 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:28.811916 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:27.633200 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:27.633774 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:27.633824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:27.633712 1130456 retry.go:31] will retry after 2.743873993s: waiting for machine to come up
	I0318 14:21:30.380835 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:30.381280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:30.381314 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:30.381213 1130456 retry.go:31] will retry after 4.377164627s: waiting for machine to come up
	I0318 14:21:27.531259 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:27.573198 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:21:27.619813 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:27.629766 1128964 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:27.629805 1128964 system_pods.go:61] "coredns-5dd5756b68-dsrcd" [86ac331d-2539-4fbb-8cf8-56f58afa6f99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:27.629815 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [0de3bd3b-6ee2-46e2-83f7-7c637115879f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:27.629821 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [e1e689c8-642c-428e-bddf-43c2c1524563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:27.629832 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [1a200d0f-53e6-4e44-a8b0-28b9d21f763e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:27.629837 1128964 system_pods.go:61] "kube-proxy-wbnvd" [6bf13050-a150-4133-93e2-71ddcad443ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:27.629842 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [87bc17b3-75c6-4d6b-9b8f-29823398100a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:27.629847 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-4vrvb" [d12dc531-720c-4a7a-93af-69b9005666fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:27.629852 1128964 system_pods.go:61] "storage-provisioner" [856896cd-daec-4873-8f9c-c7cadeb3c16e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:27.629857 1128964 system_pods.go:74] duration metric: took 10.000416ms to wait for pod list to return data ...
	I0318 14:21:27.629866 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:27.634112 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:27.634147 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:27.634159 1128964 node_conditions.go:105] duration metric: took 4.287491ms to run NodePressure ...
	I0318 14:21:27.634190 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:27.976277 1128964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980894 1128964 kubeadm.go:733] kubelet initialised
	I0318 14:21:27.980920 1128964 kubeadm.go:734] duration metric: took 4.609836ms waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980932 1128964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:27.986151 1128964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:29.993963 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:31.313401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:33.811753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.760820 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Found IP for machine: 192.168.50.229
	I0318 14:21:34.761353 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has current primary IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761362 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserving static IP address...
	I0318 14:21:34.761782 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.761820 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserved static IP address: 192.168.50.229
	I0318 14:21:34.761845 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | skip adding static IP to network mk-old-k8s-version-782728 - found existing host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"}
	I0318 14:21:34.761864 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Getting to WaitForSSH function...
	I0318 14:21:34.761881 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting for SSH to be available...
	I0318 14:21:34.764073 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764333 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.764360 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764532 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH client type: external
	I0318 14:21:34.764572 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa (-rw-------)
	I0318 14:21:34.764613 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:34.764631 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | About to run SSH command:
	I0318 14:21:34.764647 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | exit 0
	I0318 14:21:34.896449 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:34.896855 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetConfigRaw
	I0318 14:21:34.897582 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:34.899986 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900376 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.900416 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900800 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:21:34.901117 1129259 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:34.901147 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:34.901437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:34.904052 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904424 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.904452 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904606 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:34.904785 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.904945 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.905107 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:34.905279 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:34.905513 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:34.905531 1129259 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:35.016717 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:35.016763 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017067 1129259 buildroot.go:166] provisioning hostname "old-k8s-version-782728"
	I0318 14:21:35.017099 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017382 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.020497 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.020890 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.020924 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.021057 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.021277 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021590 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.021849 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.022055 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.022070 1129259 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-782728 && echo "old-k8s-version-782728" | sudo tee /etc/hostname
	I0318 14:21:35.147357 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-782728
	
	I0318 14:21:35.147390 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.150191 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150607 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.150636 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150853 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.151114 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151347 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151546 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.151781 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.152045 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.152072 1129259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-782728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-782728/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-782728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:35.275206 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:35.275240 1129259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:35.275285 1129259 buildroot.go:174] setting up certificates
	I0318 14:21:35.275295 1129259 provision.go:84] configureAuth start
	I0318 14:21:35.275306 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.275669 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:35.278614 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279090 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.279130 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279354 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.282199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282559 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.282595 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282756 1129259 provision.go:143] copyHostCerts
	I0318 14:21:35.282849 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:35.282867 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:35.282929 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:35.283102 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:35.283114 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:35.283139 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:35.283203 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:35.283210 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:35.283227 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:35.283275 1129259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-782728 san=[127.0.0.1 192.168.50.229 localhost minikube old-k8s-version-782728]
	I0318 14:21:35.515186 1129259 provision.go:177] copyRemoteCerts
	I0318 14:21:35.515266 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:35.515318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.517932 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518244 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.518297 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518441 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.518653 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.518795 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.518970 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:35.607609 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:35.636141 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 14:21:35.664489 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:35.692201 1129259 provision.go:87] duration metric: took 416.891642ms to configureAuth
	I0318 14:21:35.692259 1129259 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:35.692491 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:21:35.692585 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.695742 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696122 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.696159 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696325 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.696561 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696767 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696934 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.697111 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.697355 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.697384 1129259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:35.994320 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:35.994352 1129259 machine.go:97] duration metric: took 1.093217385s to provisionDockerMachine
	I0318 14:21:35.994367 1129259 start.go:293] postStartSetup for "old-k8s-version-782728" (driver="kvm2")
	I0318 14:21:35.994383 1129259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:35.994415 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:35.994757 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:35.994799 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.997438 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.997814 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.997850 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.998044 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.998241 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.998437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.998571 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.089357 1129259 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:36.094372 1129259 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:36.094407 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:36.094499 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:36.094617 1129259 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:36.094714 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:36.106796 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:36.135520 1129259 start.go:296] duration metric: took 141.136354ms for postStartSetup
	I0318 14:21:36.135573 1129259 fix.go:56] duration metric: took 23.385978091s for fixHost
	I0318 14:21:36.135607 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.139108 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139458 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.139491 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139689 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.139978 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140226 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140353 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.140528 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:36.140755 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:36.140771 1129259 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:36.252999 1128583 start.go:364] duration metric: took 57.905644198s to acquireMachinesLock for "no-preload-188109"
	I0318 14:21:36.253054 1128583 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:36.253063 1128583 fix.go:54] fixHost starting: 
	I0318 14:21:36.253510 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:36.253545 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:36.271856 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0318 14:21:36.272254 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:36.272790 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:21:36.272822 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:36.273237 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:36.273446 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:36.273614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:21:36.275414 1128583 fix.go:112] recreateIfNeeded on no-preload-188109: state=Stopped err=<nil>
	I0318 14:21:36.275440 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	W0318 14:21:36.275623 1128583 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:36.277528 1128583 out.go:177] * Restarting existing kvm2 VM for "no-preload-188109" ...
	I0318 14:21:31.995770 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.495078 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.252848 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771696.238093940
	
	I0318 14:21:36.252877 1129259 fix.go:216] guest clock: 1710771696.238093940
	I0318 14:21:36.252884 1129259 fix.go:229] Guest: 2024-03-18 14:21:36.23809394 +0000 UTC Remote: 2024-03-18 14:21:36.13557956 +0000 UTC m=+255.035410784 (delta=102.51438ms)
	I0318 14:21:36.252906 1129259 fix.go:200] guest clock delta is within tolerance: 102.51438ms
	I0318 14:21:36.252911 1129259 start.go:83] releasing machines lock for "old-k8s-version-782728", held for 23.503358875s
	I0318 14:21:36.252936 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.253200 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:36.256277 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256711 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.256741 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256901 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257487 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257702 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257827 1129259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:36.257887 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.258009 1129259 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:36.258034 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.260840 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261336 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261358 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261456 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.261692 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.261789 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261818 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261892 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.261982 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.262127 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.262173 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.262300 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.262429 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.345131 1129259 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:36.371649 1129259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:36.524261 1129259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:36.533020 1129259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:36.533151 1129259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:36.551817 1129259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:36.551860 1129259 start.go:494] detecting cgroup driver to use...
	I0318 14:21:36.551933 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:36.575948 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:36.596748 1129259 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:36.596820 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:36.614156 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:36.630681 1129259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:36.753374 1129259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:36.944402 1129259 docker.go:233] disabling docker service ...
	I0318 14:21:36.944496 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:36.966727 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:36.987565 1129259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:37.121256 1129259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:37.264652 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:37.281737 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:37.306307 1129259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 14:21:37.306374 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.318728 1129259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:37.318818 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.330587 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.343063 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.356170 1129259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:37.369932 1129259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:37.380417 1129259 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:37.380487 1129259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:37.397409 1129259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:37.414745 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:37.571427 1129259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:37.747275 1129259 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:37.747357 1129259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:37.752838 1129259 start.go:562] Will wait 60s for crictl version
	I0318 14:21:37.752922 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:37.758286 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:37.799301 1129259 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:37.799400 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.838257 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.889692 1129259 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 14:21:35.812465 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:37.820263 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.313683 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.278973 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Start
	I0318 14:21:36.279160 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring networks are active...
	I0318 14:21:36.280043 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network default is active
	I0318 14:21:36.280495 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network mk-no-preload-188109 is active
	I0318 14:21:36.281014 1128583 main.go:141] libmachine: (no-preload-188109) Getting domain xml...
	I0318 14:21:36.281995 1128583 main.go:141] libmachine: (no-preload-188109) Creating domain...
	I0318 14:21:37.644409 1128583 main.go:141] libmachine: (no-preload-188109) Waiting to get IP...
	I0318 14:21:37.645406 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.645958 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.646047 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.645922 1130597 retry.go:31] will retry after 223.965782ms: waiting for machine to come up
	I0318 14:21:37.871397 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.871933 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.871971 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.871882 1130597 retry.go:31] will retry after 272.743353ms: waiting for machine to come up
	I0318 14:21:38.146680 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.147278 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.147309 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.147211 1130597 retry.go:31] will retry after 414.468616ms: waiting for machine to come up
	I0318 14:21:38.563199 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.563768 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.563794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.563718 1130597 retry.go:31] will retry after 582.588791ms: waiting for machine to come up
	I0318 14:21:39.147611 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.148410 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.148436 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.148315 1130597 retry.go:31] will retry after 686.425224ms: waiting for machine to come up
	I0318 14:21:39.836964 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.837647 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.837677 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.837593 1130597 retry.go:31] will retry after 878.564369ms: waiting for machine to come up
	I0318 14:21:40.717644 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:40.718346 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:40.718380 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:40.718276 1130597 retry.go:31] will retry after 1.183201382s: waiting for machine to come up
	I0318 14:21:37.891038 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:37.894295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.894865 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:37.894896 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.895237 1129259 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:37.899967 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:37.916249 1129259 kubeadm.go:877] updating cluster {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:37.916384 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:21:37.916449 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:37.974406 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:37.974492 1129259 ssh_runner.go:195] Run: which lz4
	I0318 14:21:37.979374 1129259 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:21:37.984355 1129259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:37.984400 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 14:21:39.978421 1129259 crio.go:444] duration metric: took 1.99908094s to copy over tarball
	I0318 14:21:39.978524 1129259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:36.995480 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:39.005382 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.495300 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.495345 1128964 pod_ready.go:81] duration metric: took 12.509166884s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.495358 1128964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504432 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.504467 1128964 pod_ready.go:81] duration metric: took 9.100778ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504480 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515466 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.515506 1128964 pod_ready.go:81] duration metric: took 11.017212ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515519 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525891 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.525929 1128964 pod_ready.go:81] duration metric: took 10.399892ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525943 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534161 1128964 pod_ready.go:92] pod "kube-proxy-wbnvd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.534196 1128964 pod_ready.go:81] duration metric: took 8.245545ms for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534208 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:42.314504 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:44.812532 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:41.902972 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:41.903707 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:41.903736 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:41.903670 1130597 retry.go:31] will retry after 1.282612289s: waiting for machine to come up
	I0318 14:21:43.188745 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:43.189303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:43.189332 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:43.189257 1130597 retry.go:31] will retry after 1.175485401s: waiting for machine to come up
	I0318 14:21:44.366602 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:44.367162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:44.367191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:44.367121 1130597 retry.go:31] will retry after 1.700678954s: waiting for machine to come up
	I0318 14:21:43.321091 1129259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342462355s)
	I0318 14:21:43.321144 1129259 crio.go:451] duration metric: took 3.342687518s to extract the tarball
	I0318 14:21:43.321155 1129259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:21:43.365776 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:43.433785 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:43.433824 1129259 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:43.433900 1129259 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.434017 1129259 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.434032 1129259 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 14:21:43.434046 1129259 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.434053 1129259 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.434305 1129259 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436059 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.436080 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.436108 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.436157 1129259 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.436171 1129259 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436220 1129259 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 14:21:43.436239 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.436852 1129259 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.592274 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.597491 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.602837 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.613030 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.613827 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.626606 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.643937 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 14:21:43.712054 1129259 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 14:21:43.712144 1129259 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.712203 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.745459 1129259 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 14:21:43.745524 1129259 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.745578 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.804000 1129259 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 14:21:43.804069 1129259 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.804132 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.818890 1129259 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 14:21:43.818946 1129259 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.818948 1129259 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 14:21:43.818984 1129259 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.818996 1129259 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 14:21:43.819000 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819013 1129259 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.819034 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819043 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819047 1129259 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 14:21:43.819079 1129259 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 14:21:43.819111 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819145 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.819113 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.819191 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.900808 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 14:21:43.900881 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 14:21:43.900956 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 14:21:43.900960 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.901030 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 14:21:43.901092 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.901124 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.979791 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 14:21:43.999132 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 14:21:44.055513 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:44.211993 1129259 cache_images.go:92] duration metric: took 778.138355ms to LoadCachedImages
	W0318 14:21:44.212165 1129259 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0318 14:21:44.212193 1129259 kubeadm.go:928] updating node { 192.168.50.229 8443 v1.20.0 crio true true} ...
	I0318 14:21:44.212368 1129259 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-782728 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
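
The unit text above is the kubelet systemd drop-in rendered for this profile; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 430-byte scp). A minimal way to confirm what the node actually runs, assuming the profile name from this run and a reachable VM:

    # show the kubelet unit together with the drop-in minikube wrote
    minikube ssh -p old-k8s-version-782728 -- systemctl cat kubelet
    # or read the rendered drop-in directly
    minikube ssh -p old-k8s-version-782728 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
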
	I0318 14:21:44.212495 1129259 ssh_runner.go:195] Run: crio config
	I0318 14:21:44.269727 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:21:44.269766 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:44.269785 1129259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:44.269814 1129259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-782728 NodeName:old-k8s-version-782728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 14:21:44.270015 1129259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-782728"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:44.270105 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 14:21:44.282940 1129259 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:44.283039 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:44.295320 1129259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 14:21:44.315686 1129259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:44.335233 1129259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 14:21:44.357698 1129259 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:44.362264 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
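
The one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP (192.168.50.229). A quick sanity check from the node, assuming this profile:

    minikube ssh -p old-k8s-version-782728 -- grep control-plane.minikube.internal /etc/hosts
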
	I0318 14:21:44.377101 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:44.528190 1129259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:44.549708 1129259 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728 for IP: 192.168.50.229
	I0318 14:21:44.549735 1129259 certs.go:194] generating shared ca certs ...
	I0318 14:21:44.549763 1129259 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:44.549989 1129259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:44.550058 1129259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:44.550074 1129259 certs.go:256] generating profile certs ...
	I0318 14:21:44.550213 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.key
	I0318 14:21:44.550297 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612
	I0318 14:21:44.550356 1129259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key
	I0318 14:21:44.550551 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:44.550592 1129259 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:44.550606 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:44.550645 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:44.550677 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:44.550723 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:44.550778 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:44.551493 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:44.612076 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:44.644841 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:44.677687 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:44.719459 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 14:21:44.767865 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 14:21:44.816764 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:44.860167 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:44.891216 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:44.927632 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:44.965589 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:45.002269 1129259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:45.025347 1129259 ssh_runner.go:195] Run: openssl version
	I0318 14:21:45.032361 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:45.046783 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052835 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052942 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.060025 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:45.073939 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:45.087380 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092866 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092945 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.099328 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:45.112233 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:45.126449 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132566 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132667 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.139307 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:45.153117 1129259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:45.158588 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:45.166096 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:45.173537 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:45.181337 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:45.189126 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:45.197163 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
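
The -checkend 86400 probes above ask whether each certificate will still be valid 24 hours from now; a failing check would flag that cert for regeneration. Run by hand on the node (paths from this run), the check looks like:

    # exit 0 = still valid in 86400 s (24 h), exit 1 = expires before then
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'
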
	I0318 14:21:45.206171 1129259 kubeadm.go:391] StartCluster: {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:45.206295 1129259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:45.206370 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.247013 1129259 cri.go:89] found id: ""
	I0318 14:21:45.247119 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:45.261917 1129259 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:45.261947 1129259 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:45.261955 1129259 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:45.262015 1129259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:45.276154 1129259 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:45.277263 1129259 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:21:45.277937 1129259 kubeconfig.go:62] /home/jenkins/minikube-integration/18427-1067917/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-782728" cluster setting kubeconfig missing "old-k8s-version-782728" context setting]
	I0318 14:21:45.278862 1129259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:45.280825 1129259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:45.295159 1129259 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.229
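
The diff above compares the kubeadm config rendered earlier in this run (/var/tmp/minikube/kubeadm.yaml.new) with the one left by the previous start; an empty diff is what produces the "does not require reconfiguration" message. Reproducing the check by hand, assuming this profile:

    minikube ssh -p old-k8s-version-782728 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
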
	I0318 14:21:45.295211 1129259 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:45.295255 1129259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:45.295321 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.343156 1129259 cri.go:89] found id: ""
	I0318 14:21:45.343242 1129259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:45.361812 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:45.376218 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:45.376250 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:45.376314 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:45.386913 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:45.387056 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:45.398244 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:45.409397 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:45.409476 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:45.421057 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.432124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:45.432193 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.443793 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:45.454348 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:45.454463 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:45.465286 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:45.477199 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:45.613588 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:41.690971 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:41.691009 1128964 pod_ready.go:81] duration metric: took 1.156786821s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:41.691020 1128964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:44.189110 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.201644 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.813954 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:48.817402 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.069196 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:46.069747 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:46.069797 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:46.069687 1130597 retry.go:31] will retry after 2.354521412s: waiting for machine to come up
	I0318 14:21:48.425714 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:48.426186 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:48.426219 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:48.426147 1130597 retry.go:31] will retry after 2.74319235s: waiting for machine to come up
	I0318 14:21:46.567767 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.838421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.993039 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:47.096766 1129259 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:47.096883 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:47.596963 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.097569 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.597879 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.097195 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.597924 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.097885 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.597926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:51.096984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
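
The repeated pgrep calls above are minikube polling, roughly every 500 ms, for a kube-apiserver process to appear after the kubeadm init phases. A rough shell equivalent of that wait loop, as a sketch:

    # -f matches the full command line, -x requires the whole line to match, -n picks the newest PID
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done
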
	I0318 14:21:48.699275 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:50.699690 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.311999 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:53.811066 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.173264 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.173844 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:51.173880 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:51.173784 1130597 retry.go:31] will retry after 4.489599719s: waiting for machine to come up
	I0318 14:21:55.665080 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665639 1128583 main.go:141] libmachine: (no-preload-188109) Found IP for machine: 192.168.61.40
	I0318 14:21:55.665675 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has current primary IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665686 1128583 main.go:141] libmachine: (no-preload-188109) Reserving static IP address...
	I0318 14:21:55.666111 1128583 main.go:141] libmachine: (no-preload-188109) Reserved static IP address: 192.168.61.40
	I0318 14:21:55.666149 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.666164 1128583 main.go:141] libmachine: (no-preload-188109) Waiting for SSH to be available...
	I0318 14:21:55.666191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | skip adding static IP to network mk-no-preload-188109 - found existing host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"}
	I0318 14:21:55.666205 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Getting to WaitForSSH function...
	I0318 14:21:55.668473 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668792 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.668837 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668947 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH client type: external
	I0318 14:21:55.668989 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa (-rw-------)
	I0318 14:21:55.669020 1128583 main.go:141] libmachine: (no-preload-188109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:55.669043 1128583 main.go:141] libmachine: (no-preload-188109) DBG | About to run SSH command:
	I0318 14:21:55.669095 1128583 main.go:141] libmachine: (no-preload-188109) DBG | exit 0
	I0318 14:21:55.796228 1128583 main.go:141] libmachine: (no-preload-188109) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:55.796668 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetConfigRaw
	I0318 14:21:55.797378 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:55.800241 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.800716 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.800771 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.801150 1128583 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/config.json ...
	I0318 14:21:55.801416 1128583 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:55.801441 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:55.801690 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.804667 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.597867 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.097894 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.597872 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.096949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.597262 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.097637 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.597078 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.097246 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.597940 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:56.097312 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.700698 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.198658 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.805029 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.805269 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.806759 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.806983 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807220 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807421 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.807623 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.807952 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.807982 1128583 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:55.920939 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:55.920993 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921259 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:21:55.921292 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921510 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.924430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.924921 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.924962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.925153 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.925431 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925792 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.926029 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.926301 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.926320 1128583 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-188109 && echo "no-preload-188109" | sudo tee /etc/hostname
	I0318 14:21:56.051873 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-188109
	
	I0318 14:21:56.051915 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.055015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055387 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.055422 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055659 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.055887 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056058 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056190 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.056318 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.056508 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.056525 1128583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-188109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-188109/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-188109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:56.178366 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:56.178401 1128583 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:56.178443 1128583 buildroot.go:174] setting up certificates
	I0318 14:21:56.178454 1128583 provision.go:84] configureAuth start
	I0318 14:21:56.178465 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:56.178859 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:56.181995 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.182457 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182724 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.185337 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185623 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.185649 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185880 1128583 provision.go:143] copyHostCerts
	I0318 14:21:56.185968 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:56.185983 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:56.186073 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:56.186249 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:56.186264 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:56.186296 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:56.186392 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:56.186406 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:56.186432 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:56.186511 1128583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.no-preload-188109 san=[127.0.0.1 192.168.61.40 localhost minikube no-preload-188109]
	I0318 14:21:56.332196 1128583 provision.go:177] copyRemoteCerts
	I0318 14:21:56.332267 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:56.332295 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.335310 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335604 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.335639 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335787 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.336002 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.336170 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.336310 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.427529 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:56.459132 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:21:56.488690 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
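
The server certificate generated above carries the SANs listed in the provision step (127.0.0.1, 192.168.61.40, localhost, minikube, no-preload-188109) and is copied to /etc/docker/server.pem on the node. To inspect its SANs from the host, using the paths from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
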
	I0318 14:21:56.516043 1128583 provision.go:87] duration metric: took 337.568576ms to configureAuth
	I0318 14:21:56.516088 1128583 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:56.516309 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:21:56.516457 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.519576 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.519998 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.520059 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.520237 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.520460 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520677 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520876 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.521065 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.521290 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.521307 1128583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:56.831034 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:56.831076 1128583 machine.go:97] duration metric: took 1.029643209s to provisionDockerMachine
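
The SSH command above writes /etc/sysconfig/crio.minikube with an --insecure-registry entry for the service CIDR and restarts CRI-O. A quick way to confirm the drop-in landed and the restart succeeded, assuming this profile:

    minikube ssh -p no-preload-188109 -- cat /etc/sysconfig/crio.minikube
    minikube ssh -p no-preload-188109 -- sudo systemctl is-active crio
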
	I0318 14:21:56.831092 1128583 start.go:293] postStartSetup for "no-preload-188109" (driver="kvm2")
	I0318 14:21:56.831107 1128583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:56.831126 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:56.831549 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:56.831611 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.834520 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.834962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.834992 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.835234 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.835415 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.835582 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.835743 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.927694 1128583 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:56.932973 1128583 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:56.933002 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:56.933088 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:56.933200 1128583 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:56.933345 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:56.943594 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:56.971483 1128583 start.go:296] duration metric: took 140.368525ms for postStartSetup
	I0318 14:21:56.971564 1128583 fix.go:56] duration metric: took 20.718501273s for fixHost
	I0318 14:21:56.971618 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.974721 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975185 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.975250 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975409 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.975679 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.975885 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.976049 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.976242 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.976438 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.976453 1128583 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 14:21:57.089795 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771717.066528661
	
	I0318 14:21:57.089823 1128583 fix.go:216] guest clock: 1710771717.066528661
	I0318 14:21:57.089834 1128583 fix.go:229] Guest: 2024-03-18 14:21:57.066528661 +0000 UTC Remote: 2024-03-18 14:21:56.971568576 +0000 UTC m=+361.214853207 (delta=94.960085ms)
	I0318 14:21:57.089865 1128583 fix.go:200] guest clock delta is within tolerance: 94.960085ms
	I0318 14:21:57.089873 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 20.836840869s
	I0318 14:21:57.089898 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.090297 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:57.094015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094517 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.094563 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094920 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095607 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095844 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095978 1128583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:57.096034 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.096182 1128583 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:57.096221 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.099303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099329 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099754 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099854 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099869 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.100103 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100118 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100339 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100568 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100578 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100766 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.100781 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.203060 1128583 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:57.209943 1128583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:57.368686 1128583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:57.376289 1128583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:57.376375 1128583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:57.394365 1128583 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:57.394405 1128583 start.go:494] detecting cgroup driver to use...
	I0318 14:21:57.394488 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:57.412172 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:57.428895 1128583 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:57.428988 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:57.445064 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:57.461255 1128583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:57.596381 1128583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:57.774782 1128583 docker.go:233] disabling docker service ...
	I0318 14:21:57.774890 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:57.791820 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:57.807412 1128583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:57.961890 1128583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:58.118122 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:58.133994 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:58.155336 1128583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:58.155429 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.167537 1128583 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:58.167642 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.180814 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.193997 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.206817 1128583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
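	[editor's note] The sed commands above rewrite `pause_image`, `cgroup_manager` and `conmon_cgroup` in the CRI-O drop-in config. The Go sketch below shows the same "replace a key = value line" idea on a local file; the path and values are copied from the log, while the helper name, regexp approach and error handling are illustrative assumptions rather than minikube's implementation.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfValue rewrites any `key = ...` line in a TOML-style config file,
	// mirroring the effect of the sed invocations in the log above.
	func setConfValue(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		// Running this against the real guest config would need root on the VM.
		if err := setConfValue("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
			fmt.Println("error:", err)
		}
	}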
	I0318 14:21:58.220843 1128583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:58.232012 1128583 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:58.232073 1128583 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:58.246610 1128583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:58.260393 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:58.416723 1128583 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:58.588776 1128583 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:58.588864 1128583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:58.594689 1128583 start.go:562] Will wait 60s for crictl version
	I0318 14:21:58.594787 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:58.599287 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:58.634954 1128583 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
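	[editor's note] The "Will wait 60s for socket path /var/run/crio/crio.sock" step above amounts to polling for the runtime socket before querying crictl. A minimal Go sketch of such a wait loop, assuming a 500 ms poll interval (the interval is not taken from the log):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a path until it exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		// Path taken from the log above.
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is present; crictl version can be queried now")
	}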
	I0318 14:21:58.635059 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.667031 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.703316 1128583 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 14:21:55.812079 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:57.813027 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.310988 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:58.704763 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:58.708030 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708495 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:58.708527 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708738 1128583 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:58.713408 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
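	[editor's note] The bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway address, dropping any stale entry first. A rough Go equivalent of that grep-and-append pattern follows; it operates on a scratch file because the real /etc/hosts needs root, and the helper name is an illustrative assumption.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites an /etc/hosts-style file so it contains exactly
	// one "<ip>\t<name>" line for the given name.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale mapping for this name
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Dry run on a scratch copy rather than the real /etc/hosts.
		_ = os.WriteFile("/tmp/hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
		if err := ensureHostsEntry("/tmp/hosts.test", "192.168.61.1", "host.minikube.internal"); err != nil {
			fmt.Println("error:", err)
		}
	}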
	I0318 14:21:58.726934 1128583 kubeadm.go:877] updating cluster {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:58.727067 1128583 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:21:58.727105 1128583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:58.764875 1128583 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 14:21:58.764904 1128583 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:58.764976 1128583 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.765019 1128583 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.765091 1128583 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.765117 1128583 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.765142 1128583 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.765158 1128583 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.765125 1128583 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.765098 1128583 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766495 1128583 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766589 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.766592 1128583 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.766768 1128583 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.766924 1128583 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.766492 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.919274 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 14:21:58.934955 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.945887 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.954907 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.961334 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.976485 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.991515 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.100572 1128583 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 14:21:59.100624 1128583 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.100684 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.125681 1128583 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 14:21:59.125740 1128583 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.125799 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.138461 1128583 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 14:21:59.138521 1128583 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.138579 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149655 1128583 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 14:21:59.149697 1128583 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.149763 1128583 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149803 1128583 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.149831 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.149839 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149790 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149875 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.231815 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.231851 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 14:21:59.231959 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:21:59.232052 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.232060 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.232064 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.232148 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.317997 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 14:21:59.318029 1128583 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318083 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318116 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318158 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318213 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318240 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.318246 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 14:21:59.318252 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318281 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 14:21:59.318315 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.364549 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:56.597953 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.098324 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.598002 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.097907 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.597192 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.097990 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.597523 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.097862 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:01.097925 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.703771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.200048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:02.313802 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.812944 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:03.246360 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.928017963s)
	I0318 14:22:03.246414 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246364 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.928251379s)
	I0318 14:22:03.246429 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 14:22:03.246439 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.92820974s)
	I0318 14:22:03.246454 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246468 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246415 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.928141711s)
	I0318 14:22:03.246512 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246515 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246516 1128583 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.88192635s)
	I0318 14:22:03.246587 1128583 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 14:22:03.246641 1128583 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:03.246704 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:22:01.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.097198 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.597105 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.097996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.597914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.097805 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.597949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.097415 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.597222 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:06.096954 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.203222 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.699887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.813730 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.311491 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.317600 1128583 ssh_runner.go:235] Completed: which crictl: (3.070863461s)
	I0318 14:22:06.317700 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:06.317775 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.071235517s)
	I0318 14:22:06.317805 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 14:22:06.317837 1128583 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.317907 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.370328 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 14:22:06.370435 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.243401402s)
	I0318 14:22:08.613903 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.295918452s)
	I0318 14:22:08.613917 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 14:22:08.613941 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:08.613994 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:06.597785 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.097171 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.597738 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.097476 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.596984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.097503 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.597464 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.096998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.597822 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.097597 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.199978 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.200394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.312752 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:13.812826 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.076840 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462814214s)
	I0318 14:22:11.076881 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 14:22:11.076917 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:11.076968 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:13.332851 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.25584847s)
	I0318 14:22:13.332896 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 14:22:13.332932 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:13.333002 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:14.705785 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.372744893s)
	I0318 14:22:14.705843 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 14:22:14.705881 1128583 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:14.705945 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:15.467380 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 14:22:15.467432 1128583 cache_images.go:123] Successfully loaded all cached images
	I0318 14:22:15.467439 1128583 cache_images.go:92] duration metric: took 16.702522125s to LoadCachedImages
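	[editor's note] The `stat -c "%!s(MISSING) %!y(MISSING)"` probes (really `stat -c "%s %y"`) and the "copy: skipping ... (exists)" lines above show minikube only re-transferring a cached image tarball when the copy on the guest differs, before `podman load`-ing it. The Go sketch below shows that size-plus-mtime comparison on a local filesystem; paths are copied from the log, but minikube performs the remote stat over SSH, so this is illustrative only.

	package main

	import (
		"fmt"
		"os"
	)

	// needsCopy reports whether dst must be (re)copied from src, using a
	// size + modification-time comparison.
	func needsCopy(src, dst string) (bool, error) {
		s, err := os.Stat(src)
		if err != nil {
			return false, err
		}
		d, err := os.Stat(dst)
		if err != nil {
			if os.IsNotExist(err) {
				return true, nil // destination missing: copy it
			}
			return false, err
		}
		// Same size and mtime: assume the cached tarball is already in place.
		return s.Size() != d.Size() || !s.ModTime().Equal(d.ModTime()), nil
	}

	func main() {
		copyIt, err := needsCopy(
			"/home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0",
			"/var/lib/minikube/images/etcd_3.5.10-0",
		)
		fmt.Println("copy needed:", copyIt, "err:", err)
	}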
	I0318 14:22:15.467456 1128583 kubeadm.go:928] updating node { 192.168.61.40 8443 v1.29.0-rc.2 crio true true} ...
	I0318 14:22:15.467619 1128583 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-188109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:22:15.467790 1128583 ssh_runner.go:195] Run: crio config
	I0318 14:22:15.520678 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:15.520705 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:15.520718 1128583 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:22:15.520740 1128583 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.40 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-188109 NodeName:no-preload-188109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:22:15.520893 1128583 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.40
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-188109"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.40
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.40"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:22:15.520965 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 14:22:15.534187 1128583 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:22:15.534260 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:22:15.546509 1128583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 14:22:15.567029 1128583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 14:22:15.586866 1128583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 14:22:15.609161 1128583 ssh_runner.go:195] Run: grep 192.168.61.40	control-plane.minikube.internal$ /etc/hosts
	I0318 14:22:15.614800 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.40	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:22:15.630088 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:22:15.754729 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:22:15.774062 1128583 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109 for IP: 192.168.61.40
	I0318 14:22:15.774093 1128583 certs.go:194] generating shared ca certs ...
	I0318 14:22:15.774114 1128583 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:22:15.774374 1128583 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:22:15.774434 1128583 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:22:15.774448 1128583 certs.go:256] generating profile certs ...
	I0318 14:22:15.774537 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/client.key
	I0318 14:22:15.774607 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key.8d4024a9
	I0318 14:22:15.774652 1128583 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key
	I0318 14:22:15.774833 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:22:15.774871 1128583 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:22:15.774882 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:22:15.774926 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:22:15.774972 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:22:15.775031 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:22:15.775106 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:22:15.775902 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:22:11.597959 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.097914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.597046 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.097863 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.597617 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.097268 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.597088 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.097142 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.597902 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:16.098091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.698561 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:14.199200 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.200026 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.312392 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:18.812463 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:15.821418 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:22:15.874044 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:22:15.910814 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:22:15.965889 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 14:22:16.001003 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:22:16.030033 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:22:16.060519 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:22:16.089952 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:22:16.119397 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:22:16.150036 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:22:16.179489 1128583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:22:16.201823 1128583 ssh_runner.go:195] Run: openssl version
	I0318 14:22:16.208496 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:22:16.222723 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228161 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228239 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.234994 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:22:16.248672 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:22:16.262626 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268255 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268361 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.274868 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:22:16.287251 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:22:16.299690 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304633 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304718 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.311230 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:22:16.325483 1128583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:22:16.331012 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:22:16.338731 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:22:16.346289 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:22:16.353403 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:22:16.359967 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:22:16.367151 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
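	[editor's note] Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours (86400 s). The same check expressed with Go's crypto/x509 follows; the path is copied from the log and the helper name is illustrative, and reading the real cert file requires access to the guest.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within the given window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println("expires within 24h:", soon, "err:", err)
	}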
	I0318 14:22:16.373719 1128583 kubeadm.go:391] StartCluster: {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:22:16.373823 1128583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:22:16.373921 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.417874 1128583 cri.go:89] found id: ""
	I0318 14:22:16.417957 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:22:16.431026 1128583 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:22:16.431057 1128583 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:22:16.431065 1128583 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:22:16.431125 1128583 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:22:16.445445 1128583 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:22:16.446576 1128583 kubeconfig.go:125] found "no-preload-188109" server: "https://192.168.61.40:8443"
	I0318 14:22:16.449104 1128583 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:22:16.461001 1128583 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.40
	I0318 14:22:16.461042 1128583 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:22:16.461056 1128583 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:22:16.461104 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.502356 1128583 cri.go:89] found id: ""
	I0318 14:22:16.502437 1128583 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:22:16.525636 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:22:16.538600 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:22:16.538626 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:22:16.538677 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:22:16.550720 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:22:16.550803 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:22:16.562585 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:22:16.573439 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:22:16.573502 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:22:16.585548 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.596619 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:22:16.596706 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.608458 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:22:16.619498 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:22:16.619587 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:22:16.631359 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:22:16.643420 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:16.765437 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:17.862932 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.097434993s)
	I0318 14:22:17.862980 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.097197 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.168390 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.295118 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:22:18.295225 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.795897 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.295431 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.335088 1128583 api_server.go:72] duration metric: took 1.039967082s to wait for apiserver process to appear ...
	I0318 14:22:19.335128 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:22:19.335163 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:19.335912 1128583 api_server.go:269] stopped: https://192.168.61.40:8443/healthz: Get "https://192.168.61.40:8443/healthz": dial tcp 192.168.61.40:8443: connect: connection refused
	I0318 14:22:19.836266 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:16.597253 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.097759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.597764 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.097196 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.597181 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.097798 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.598008 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.097899 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.597717 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:21.097339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.699537 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:21.199910 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:22.338349 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.338383 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.338402 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.351154 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.351190 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.835446 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.841044 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:22.841092 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.335665 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.347092 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.347126 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.835731 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.840517 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.840559 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:24.336151 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:24.340981 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:22:24.354524 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:22:24.354560 1128583 api_server.go:131] duration metric: took 5.019424083s to wait for apiserver health ...
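	(The healthz progression above, connection refused, then 403 for the anonymous user, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200, is the normal shape of an API server restart. The same endpoint can be probed by hand; the address comes from the log, and the verbose query parameter produces the per-check breakdown shown in the 500 bodies:

	    # Illustrative probe of the endpoint minikube is polling.
	    # -k skips TLS verification; without client certs the request runs as system:anonymous,
	    # which is why the early probes were rejected with 403.
	    curl -k "https://192.168.61.40:8443/healthz?verbose"
	)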
	I0318 14:22:24.354570 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:24.354576 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:24.356602 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:22:20.818751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:23.312003 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:24.358089 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:22:24.375159 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
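	(The 457-byte conflist itself is not reproduced in the log; a bridge CNI config of this kind typically pairs a bridge plugin using host-local IPAM with a portmap plugin, but the exact contents here are not shown. Once written it can simply be inspected on the node, path taken from the log:

	    # View the bridge CNI config minikube just wrote.
	    sudo cat /etc/cni/net.d/1-k8s.conflist
	)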
	I0318 14:22:24.426409 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:22:24.452289 1128583 system_pods.go:59] 8 kube-system pods found
	I0318 14:22:24.452326 1128583 system_pods.go:61] "coredns-76f75df574-cksb5" [9cd14e15-7b0f-4978-b667-cba1a54db074] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:22:24.452333 1128583 system_pods.go:61] "etcd-no-preload-188109" [fa7d3ae7-2ac1-4275-8739-686c2e3b7569] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:22:24.452345 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [135ee544-ca83-41ab-9cb2-070587eb3b77] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:22:24.452351 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [fd91846b-6210-4cab-ae0f-5e942b4f596e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:22:24.452361 1128583 system_pods.go:61] "kube-proxy-k5kcr" [a1649d3a-9063-49c3-a8a5-04879eee108b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:22:24.452367 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [5bbb4165-ca8f-4807-ad01-bb35c56b6aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:22:24.452375 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-6pn6n" [004af8d8-fa8c-475c-9604-ed98ccceb3a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:22:24.452390 1128583 system_pods.go:61] "storage-provisioner" [45cae6ca-e3ad-4f7e-9d10-96e091160e4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:22:24.452404 1128583 system_pods.go:74] duration metric: took 25.960889ms to wait for pod list to return data ...
	I0318 14:22:24.452417 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:22:24.456337 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:22:24.456367 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:22:24.456404 1128583 node_conditions.go:105] duration metric: took 3.980296ms to run NodePressure ...
	I0318 14:22:24.456424 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:24.738808 1128583 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743864 1128583 kubeadm.go:733] kubelet initialised
	I0318 14:22:24.743893 1128583 kubeadm.go:734] duration metric: took 5.054661ms waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743905 1128583 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:22:24.749832 1128583 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
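	(Each pod_ready probe above reads the pod's Ready condition through the Kubernetes API; a rough kubectl equivalent of the wait that just started, using the pod and namespace named in the log, would be the following. The context name is an assumption based on the node name and may differ:

	    kubectl --context no-preload-188109 -n kube-system \
	      wait --for=condition=Ready pod/coredns-76f75df574-cksb5 --timeout=4m
	)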
	I0318 14:22:21.597443 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.097053 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.597084 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.097025 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.597649 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.097040 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.597607 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.097886 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.597114 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:26.097643 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.700193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.198261 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:25.810553 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:27.811576 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.310813 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.757033 1128583 pod_ready.go:102] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:28.757522 1128583 pod_ready.go:92] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:28.757562 1128583 pod_ready.go:81] duration metric: took 4.007696709s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:28.757576 1128583 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:30.767877 1128583 pod_ready.go:102] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.597493 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.097772 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.597033 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.097997 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.597751 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.097139 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.596987 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.097453 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.598006 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:31.097066 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.199688 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.199994 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:32.311356 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.311807 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.265717 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:31.265745 1128583 pod_ready.go:81] duration metric: took 2.508162139s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:31.265755 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:33.273718 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:35.275477 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.597688 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.097887 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.597759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.097858 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.597065 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.097024 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.597018 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.097472 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.597226 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.097920 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.200137 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.698589 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:36.812617 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.312289 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:37.774164 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.273935 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.273990 1128583 pod_ready.go:81] duration metric: took 8.008204942s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.274005 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280284 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.280313 1128583 pod_ready.go:81] duration metric: took 6.300519ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280324 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286027 1128583 pod_ready.go:92] pod "kube-proxy-k5kcr" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.286052 1128583 pod_ready.go:81] duration metric: took 5.721757ms for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286061 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292404 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.292450 1128583 pod_ready.go:81] duration metric: took 6.381121ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292462 1128583 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
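	(The metrics-server pod above never reports Ready for the remainder of this log. For anyone reproducing this by hand, a quick way to see the failing readiness probe or image pull is to describe the pod; the context name is again an assumption based on the node name:

	    kubectl --context no-preload-188109 -n kube-system \
	      describe pod metrics-server-57f55c9bc5-6pn6n
	)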
	I0318 14:22:36.597756 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.097176 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.597091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.097280 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.597026 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.097810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.597789 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.097897 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.597313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:41.096966 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.699760 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.198691 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.199259 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.812494 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:44.312890 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.300167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:43.803022 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.597849 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.097957 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.597473 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.097624 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.597810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.098012 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.597317 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.097384 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.597816 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:46.097353 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.199771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:45.698884 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.811124 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.827580 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.300768 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.300891 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.800442 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.597824 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:47.097559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:47.097660 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:47.142970 1129259 cri.go:89] found id: ""
	I0318 14:22:47.143027 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.143040 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:47.143047 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:47.143196 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:47.183530 1129259 cri.go:89] found id: ""
	I0318 14:22:47.183564 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.183573 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:47.183578 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:47.183654 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:47.226284 1129259 cri.go:89] found id: ""
	I0318 14:22:47.226317 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.226351 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:47.226359 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:47.226433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:47.272642 1129259 cri.go:89] found id: ""
	I0318 14:22:47.272684 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.272708 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:47.272725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:47.272791 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:47.318501 1129259 cri.go:89] found id: ""
	I0318 14:22:47.318547 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.318562 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:47.318571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:47.318652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:47.357743 1129259 cri.go:89] found id: ""
	I0318 14:22:47.357786 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.357801 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:47.357810 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:47.357894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:47.398516 1129259 cri.go:89] found id: ""
	I0318 14:22:47.398550 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.398563 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:47.398571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:47.398649 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:47.443375 1129259 cri.go:89] found id: ""
	I0318 14:22:47.443413 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.443426 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:47.443439 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:47.443456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:47.512719 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:47.512773 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:47.560380 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:47.560421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:47.616159 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:47.616221 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:47.631903 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:47.631945 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:47.766159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
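	(When no control-plane containers are found, minikube falls back to gathering node-level diagnostics, and each kubectl describe nodes attempt fails with connection refused because this profile's v1.20.0 apiserver is not up yet. The same sweep can be run by hand on the node; the commands below are taken directly from the log:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	)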
	I0318 14:22:50.267365 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:50.287102 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:50.287169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:50.326581 1129259 cri.go:89] found id: ""
	I0318 14:22:50.326618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.326630 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:50.326638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:50.326719 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:50.366526 1129259 cri.go:89] found id: ""
	I0318 14:22:50.366563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.366577 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:50.366585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:50.366656 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:50.407884 1129259 cri.go:89] found id: ""
	I0318 14:22:50.407920 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.407932 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:50.407939 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:50.408011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:50.446932 1129259 cri.go:89] found id: ""
	I0318 14:22:50.446971 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.446982 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:50.446990 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:50.447047 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:50.490489 1129259 cri.go:89] found id: ""
	I0318 14:22:50.490529 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.490542 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:50.490552 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:50.490632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:50.531796 1129259 cri.go:89] found id: ""
	I0318 14:22:50.531876 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.531896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:50.531911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:50.532000 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:50.579429 1129259 cri.go:89] found id: ""
	I0318 14:22:50.579464 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.579473 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:50.579480 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:50.579555 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:50.617981 1129259 cri.go:89] found id: ""
	I0318 14:22:50.618053 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.618070 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:50.618086 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:50.618107 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:50.690265 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:50.690316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:50.738713 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:50.738750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:50.793127 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:50.793176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:50.809608 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:50.809645 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:50.893389 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:47.699312 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.199049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:51.312163 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.812711 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:52.800573 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:54.801034 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.394103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:53.410405 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:53.410485 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:53.451524 1129259 cri.go:89] found id: ""
	I0318 14:22:53.451563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.451577 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:53.451585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:53.451650 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:53.492923 1129259 cri.go:89] found id: ""
	I0318 14:22:53.492958 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.492972 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:53.492980 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:53.493053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:53.535699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.535738 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.535751 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:53.535757 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:53.535846 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:53.575766 1129259 cri.go:89] found id: ""
	I0318 14:22:53.575807 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.575818 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:53.575843 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:53.575922 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:53.613442 1129259 cri.go:89] found id: ""
	I0318 14:22:53.613473 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.613495 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:53.613502 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:53.613567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:53.655108 1129259 cri.go:89] found id: ""
	I0318 14:22:53.655141 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.655152 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:53.655160 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:53.655233 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:53.693839 1129259 cri.go:89] found id: ""
	I0318 14:22:53.693879 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.693891 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:53.693898 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:53.693971 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:53.736699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.736729 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.736737 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:53.736747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:53.736759 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:53.790612 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:53.790670 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:53.806185 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:53.806226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:53.893535 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:53.893575 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:53.893593 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:53.966434 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:53.966482 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:52.698863 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:55.200175 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.311249 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:58.312362 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:57.300207 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.300788 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.513599 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:56.529572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:56.529652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:56.569850 1129259 cri.go:89] found id: ""
	I0318 14:22:56.569890 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.569905 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:56.569923 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:56.570001 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:56.607508 1129259 cri.go:89] found id: ""
	I0318 14:22:56.607542 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.607554 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:56.607562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:56.607625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:56.644693 1129259 cri.go:89] found id: ""
	I0318 14:22:56.644731 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.644742 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:56.644751 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:56.644825 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:56.686265 1129259 cri.go:89] found id: ""
	I0318 14:22:56.686304 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.686316 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:56.686323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:56.686377 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:56.732519 1129259 cri.go:89] found id: ""
	I0318 14:22:56.732552 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.732559 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:56.732565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:56.732639 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:56.770015 1129259 cri.go:89] found id: ""
	I0318 14:22:56.770049 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.770059 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:56.770067 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:56.770120 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:56.813964 1129259 cri.go:89] found id: ""
	I0318 14:22:56.813993 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.814004 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:56.814012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:56.814108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:56.853650 1129259 cri.go:89] found id: ""
	I0318 14:22:56.853695 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.853705 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:56.853718 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:56.853735 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:56.911922 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:56.911971 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:56.935385 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:56.935415 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:57.040668 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:57.040696 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:57.040710 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:57.123258 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:57.123314 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:59.674542 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:59.688636 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:59.688721 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:59.731479 1129259 cri.go:89] found id: ""
	I0318 14:22:59.731508 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.731517 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:59.731523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:59.731599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:59.778127 1129259 cri.go:89] found id: ""
	I0318 14:22:59.778157 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.778169 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:59.778176 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:59.778245 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:59.820812 1129259 cri.go:89] found id: ""
	I0318 14:22:59.820840 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.820850 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:59.820856 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:59.820930 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:59.866491 1129259 cri.go:89] found id: ""
	I0318 14:22:59.866526 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.866539 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:59.866548 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:59.866614 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:59.907135 1129259 cri.go:89] found id: ""
	I0318 14:22:59.907173 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.907185 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:59.907194 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:59.907266 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:59.948578 1129259 cri.go:89] found id: ""
	I0318 14:22:59.948618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.948627 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:59.948633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:59.948698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:59.986724 1129259 cri.go:89] found id: ""
	I0318 14:22:59.986749 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.986758 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:59.986765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:59.986834 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:00.031190 1129259 cri.go:89] found id: ""
	I0318 14:23:00.031223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:00.031233 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:00.031244 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:00.031260 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:00.087925 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:00.087970 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:00.104778 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:00.104810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:00.190730 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:00.190759 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:00.190775 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:00.282713 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:00.282763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:57.698375 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.706517 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:00.814865 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:03.312810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:01.800156 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.302577 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:02.834125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:02.852098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:02.852184 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:02.902683 1129259 cri.go:89] found id: ""
	I0318 14:23:02.902714 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.902726 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:02.902734 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:02.902844 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:02.963685 1129259 cri.go:89] found id: ""
	I0318 14:23:02.963718 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.963742 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:02.963750 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:02.963822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:03.021566 1129259 cri.go:89] found id: ""
	I0318 14:23:03.021600 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.021611 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:03.021618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:03.021689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:03.062577 1129259 cri.go:89] found id: ""
	I0318 14:23:03.062607 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.062616 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:03.062622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:03.062681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:03.101524 1129259 cri.go:89] found id: ""
	I0318 14:23:03.101554 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.101565 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:03.101573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:03.101645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:03.146253 1129259 cri.go:89] found id: ""
	I0318 14:23:03.146282 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.146294 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:03.146309 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:03.146380 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:03.189196 1129259 cri.go:89] found id: ""
	I0318 14:23:03.189230 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.189241 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:03.189250 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:03.189335 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:03.231627 1129259 cri.go:89] found id: ""
	I0318 14:23:03.231663 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.231676 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:03.231688 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:03.231719 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:03.248100 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:03.248144 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:03.325484 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:03.325509 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:03.325522 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:03.406877 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:03.406925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:03.457449 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:03.457487 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.011169 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:06.026962 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:06.027033 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:06.068556 1129259 cri.go:89] found id: ""
	I0318 14:23:06.068595 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.068606 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:06.068615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:06.068695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:06.110627 1129259 cri.go:89] found id: ""
	I0318 14:23:06.110667 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.110679 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:06.110687 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:06.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:02.198461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.199002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.199307 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:05.811934 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:08.312176 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:10.312721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.800938 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:09.302833 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.151933 1129259 cri.go:89] found id: ""
	I0318 14:23:06.152604 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.152620 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:06.152629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:06.152697 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:06.195300 1129259 cri.go:89] found id: ""
	I0318 14:23:06.195338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.195347 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:06.195353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:06.195417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:06.235155 1129259 cri.go:89] found id: ""
	I0318 14:23:06.235207 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.235220 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:06.235229 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:06.235289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:06.282729 1129259 cri.go:89] found id: ""
	I0318 14:23:06.282772 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.282785 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:06.282793 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:06.282869 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:06.323908 1129259 cri.go:89] found id: ""
	I0318 14:23:06.323940 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.323949 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:06.323955 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:06.324011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:06.365846 1129259 cri.go:89] found id: ""
	I0318 14:23:06.365888 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.365902 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:06.365915 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:06.365934 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:06.413646 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:06.413696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.465648 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:06.465688 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:06.480926 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:06.480958 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:06.554929 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:06.554966 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:06.554985 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.139322 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:09.155700 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:09.155768 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:09.200557 1129259 cri.go:89] found id: ""
	I0318 14:23:09.200585 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.200593 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:09.200599 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:09.200653 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:09.239535 1129259 cri.go:89] found id: ""
	I0318 14:23:09.239573 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.239596 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:09.239613 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:09.239698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:09.279206 1129259 cri.go:89] found id: ""
	I0318 14:23:09.279240 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.279249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:09.279256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:09.279313 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:09.323928 1129259 cri.go:89] found id: ""
	I0318 14:23:09.323964 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.323977 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:09.323986 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:09.324062 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:09.365760 1129259 cri.go:89] found id: ""
	I0318 14:23:09.365796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.365807 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:09.365814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:09.365887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:09.411362 1129259 cri.go:89] found id: ""
	I0318 14:23:09.411394 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.411405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:09.411415 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:09.411508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:09.452793 1129259 cri.go:89] found id: ""
	I0318 14:23:09.452822 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.452873 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:09.452880 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:09.452939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:09.494230 1129259 cri.go:89] found id: ""
	I0318 14:23:09.494259 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.494269 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:09.494279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:09.494292 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:09.546804 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:09.546848 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:09.562509 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:09.562545 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:09.637701 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:09.637723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:09.637738 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.721916 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:09.721962 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:08.699862 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.199072 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.315288 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.813053 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.800023 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.300632 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.271942 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:12.288424 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:12.288503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:12.329950 1129259 cri.go:89] found id: ""
	I0318 14:23:12.329990 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.330004 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:12.330012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:12.330083 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:12.368748 1129259 cri.go:89] found id: ""
	I0318 14:23:12.368798 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.368812 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:12.368821 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:12.368894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:12.408280 1129259 cri.go:89] found id: ""
	I0318 14:23:12.408313 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.408323 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:12.408329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:12.408385 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:12.449537 1129259 cri.go:89] found id: ""
	I0318 14:23:12.449583 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.449593 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:12.449605 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:12.449661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:12.488394 1129259 cri.go:89] found id: ""
	I0318 14:23:12.488427 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.488441 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:12.488449 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:12.488528 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:12.527613 1129259 cri.go:89] found id: ""
	I0318 14:23:12.527649 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.527658 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:12.527664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:12.527716 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:12.568953 1129259 cri.go:89] found id: ""
	I0318 14:23:12.568983 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.568991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:12.568997 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:12.569051 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:12.609622 1129259 cri.go:89] found id: ""
	I0318 14:23:12.609661 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.609672 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:12.609683 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:12.609696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:12.663119 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:12.663176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:12.679466 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:12.679508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:12.763085 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:12.763110 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:12.763125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:12.848677 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:12.848721 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.393108 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:15.406670 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:15.406821 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:15.445518 1129259 cri.go:89] found id: ""
	I0318 14:23:15.445556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.445567 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:15.445574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:15.445632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:15.488009 1129259 cri.go:89] found id: ""
	I0318 14:23:15.488040 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.488052 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:15.488089 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:15.488160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:15.526067 1129259 cri.go:89] found id: ""
	I0318 14:23:15.526099 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.526108 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:15.526115 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:15.526185 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:15.567573 1129259 cri.go:89] found id: ""
	I0318 14:23:15.567608 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.567622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:15.567630 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:15.567701 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:15.606585 1129259 cri.go:89] found id: ""
	I0318 14:23:15.606615 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.606626 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:15.606642 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:15.606700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:15.645265 1129259 cri.go:89] found id: ""
	I0318 14:23:15.645296 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.645305 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:15.645312 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:15.645368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:15.685299 1129259 cri.go:89] found id: ""
	I0318 14:23:15.685332 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.685342 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:15.685348 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:15.685421 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:15.725781 1129259 cri.go:89] found id: ""
	I0318 14:23:15.725818 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.725832 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:15.725848 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:15.725867 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.769528 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:15.769568 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:15.825418 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:15.825461 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:15.842139 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:15.842173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:15.922354 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:15.922419 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:15.922438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:13.199539 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:15.700968 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:17.311266 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:19.311540 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:16.800323 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.801497 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.503475 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:18.518462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:18.518561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:18.559354 1129259 cri.go:89] found id: ""
	I0318 14:23:18.559392 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.559404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:18.559412 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:18.559484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:18.604455 1129259 cri.go:89] found id: ""
	I0318 14:23:18.604488 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.604500 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:18.604507 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:18.604592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:18.646032 1129259 cri.go:89] found id: ""
	I0318 14:23:18.646098 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.646110 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:18.646119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:18.646188 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:18.684752 1129259 cri.go:89] found id: ""
	I0318 14:23:18.684791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.684802 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:18.684808 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:18.684863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:18.728256 1129259 cri.go:89] found id: ""
	I0318 14:23:18.728299 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.728321 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:18.728330 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:18.728409 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:18.771335 1129259 cri.go:89] found id: ""
	I0318 14:23:18.771382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.771392 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:18.771398 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:18.771467 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:18.812273 1129259 cri.go:89] found id: ""
	I0318 14:23:18.812305 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.812318 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:18.812331 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:18.812399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:18.854901 1129259 cri.go:89] found id: ""
	I0318 14:23:18.854942 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.854957 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:18.854971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:18.854990 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:18.939982 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:18.940031 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:18.985433 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:18.985465 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:19.041353 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:19.041405 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:19.057764 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:19.057810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:19.131974 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:18.198887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:20.698596 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.312215 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.810513 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.299039 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.300143 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.798699 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.632395 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:21.646344 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:21.646434 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:21.687475 1129259 cri.go:89] found id: ""
	I0318 14:23:21.687526 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.687542 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:21.687553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:21.687636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:21.728684 1129259 cri.go:89] found id: ""
	I0318 14:23:21.728722 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.728734 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:21.728742 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:21.728816 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:21.772395 1129259 cri.go:89] found id: ""
	I0318 14:23:21.772436 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.772449 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:21.772457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:21.772529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:21.812758 1129259 cri.go:89] found id: ""
	I0318 14:23:21.812793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.812804 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:21.812813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:21.812878 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:21.854334 1129259 cri.go:89] found id: ""
	I0318 14:23:21.854376 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.854387 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:21.854395 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:21.854468 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:21.894237 1129259 cri.go:89] found id: ""
	I0318 14:23:21.894270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.894278 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:21.894285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:21.894339 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:21.931671 1129259 cri.go:89] found id: ""
	I0318 14:23:21.931709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.931720 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:21.931729 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:21.931795 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:21.971060 1129259 cri.go:89] found id: ""
	I0318 14:23:21.971091 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.971100 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:21.971111 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:21.971125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:22.055070 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:22.055126 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.101854 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:22.101888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:22.157502 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:22.157550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:22.175612 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:22.175648 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:22.261607 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:24.761996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:24.777475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:24.777545 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:24.818385 1129259 cri.go:89] found id: ""
	I0318 14:23:24.818421 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.818434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:24.818447 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:24.818508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:24.856232 1129259 cri.go:89] found id: ""
	I0318 14:23:24.856270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.856282 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:24.856291 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:24.856360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:24.891887 1129259 cri.go:89] found id: ""
	I0318 14:23:24.891924 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.891936 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:24.891945 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:24.892020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:24.937555 1129259 cri.go:89] found id: ""
	I0318 14:23:24.937594 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.937605 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:24.937614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:24.937689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:24.978561 1129259 cri.go:89] found id: ""
	I0318 14:23:24.978598 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.978609 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:24.978620 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:24.978692 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:25.026398 1129259 cri.go:89] found id: ""
	I0318 14:23:25.026453 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.026462 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:25.026475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:25.026529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:25.063346 1129259 cri.go:89] found id: ""
	I0318 14:23:25.063382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.063394 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:25.063403 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:25.063482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:25.106097 1129259 cri.go:89] found id: ""
	I0318 14:23:25.106135 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.106147 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:25.106160 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:25.106177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:25.162362 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:25.162412 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:25.179898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:25.179943 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:25.281856 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:25.281896 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:25.281914 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:25.371561 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:25.371605 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.699705 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.200662 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.811810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.813013 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.311457 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.800554 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.304272 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.915774 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:27.931725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:27.931806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:27.971259 1129259 cri.go:89] found id: ""
	I0318 14:23:27.971297 1129259 logs.go:276] 0 containers: []
	W0318 14:23:27.971322 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:27.971340 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:27.971411 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:28.012704 1129259 cri.go:89] found id: ""
	I0318 14:23:28.012735 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.012747 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:28.012755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:28.012829 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:28.051639 1129259 cri.go:89] found id: ""
	I0318 14:23:28.051669 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.051680 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:28.051686 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:28.051753 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:28.091344 1129259 cri.go:89] found id: ""
	I0318 14:23:28.091377 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.091386 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:28.091392 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:28.091445 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:28.131190 1129259 cri.go:89] found id: ""
	I0318 14:23:28.131224 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.131237 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:28.131246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:28.131324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:28.171717 1129259 cri.go:89] found id: ""
	I0318 14:23:28.171756 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.171769 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:28.171777 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:28.171863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:28.207812 1129259 cri.go:89] found id: ""
	I0318 14:23:28.207862 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.207874 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:28.207886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:28.207942 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:28.252721 1129259 cri.go:89] found id: ""
	I0318 14:23:28.252766 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.252779 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:28.252796 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:28.252812 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:28.311227 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:28.311278 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:28.328390 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:28.328422 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:28.413973 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:28.414005 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:28.414026 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:28.504716 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:28.504764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.049944 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:31.065402 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:31.065490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:31.110647 1129259 cri.go:89] found id: ""
	I0318 14:23:31.110675 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.110683 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:31.110690 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:31.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:27.700002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.200376 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.311860 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.313084 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.802042 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:35.299530 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:31.154046 1129259 cri.go:89] found id: ""
	I0318 14:23:31.154075 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.154084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:31.154091 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:31.154162 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:31.191863 1129259 cri.go:89] found id: ""
	I0318 14:23:31.191894 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.191904 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:31.191911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:31.191979 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:31.234961 1129259 cri.go:89] found id: ""
	I0318 14:23:31.234993 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.235003 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:31.235011 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:31.235082 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:31.290365 1129259 cri.go:89] found id: ""
	I0318 14:23:31.290402 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.290414 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:31.290421 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:31.290516 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:31.331162 1129259 cri.go:89] found id: ""
	I0318 14:23:31.331198 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.331211 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:31.331219 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:31.331283 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:31.370382 1129259 cri.go:89] found id: ""
	I0318 14:23:31.370424 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.370436 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:31.370448 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:31.370520 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:31.409913 1129259 cri.go:89] found id: ""
	I0318 14:23:31.409948 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.409959 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:31.409971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:31.409987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:31.493416 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:31.493456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.546275 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:31.546309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:31.598580 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:31.598639 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:31.615741 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:31.615778 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:31.694159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.194339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:34.209763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:34.209849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:34.248405 1129259 cri.go:89] found id: ""
	I0318 14:23:34.248442 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.248456 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:34.248464 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:34.248538 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:34.290217 1129259 cri.go:89] found id: ""
	I0318 14:23:34.290249 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.290263 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:34.290270 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:34.290338 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:34.337403 1129259 cri.go:89] found id: ""
	I0318 14:23:34.337441 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.337452 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:34.337460 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:34.337533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:34.380042 1129259 cri.go:89] found id: ""
	I0318 14:23:34.380082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.380096 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:34.380105 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:34.380181 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:34.417834 1129259 cri.go:89] found id: ""
	I0318 14:23:34.417866 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.417879 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:34.417888 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:34.417960 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:34.456496 1129259 cri.go:89] found id: ""
	I0318 14:23:34.456538 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.456549 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:34.456559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:34.456629 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:34.497772 1129259 cri.go:89] found id: ""
	I0318 14:23:34.497809 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.497822 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:34.497831 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:34.497887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:34.544757 1129259 cri.go:89] found id: ""
	I0318 14:23:34.544811 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.544825 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:34.544840 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:34.544859 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:34.602192 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:34.602237 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:34.619476 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:34.619515 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:34.695721 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.695761 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:34.695781 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:34.773045 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:34.773090 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:32.212811 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.700061 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:36.811811 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.312768 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.300434 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.300586 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.320468 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:37.335756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:37.335847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:37.379742 1129259 cri.go:89] found id: ""
	I0318 14:23:37.379791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.379804 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:37.379812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:37.379898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:37.421225 1129259 cri.go:89] found id: ""
	I0318 14:23:37.421261 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.421276 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:37.421284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:37.421353 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:37.463393 1129259 cri.go:89] found id: ""
	I0318 14:23:37.463426 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.463435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:37.463441 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:37.463503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:37.505835 1129259 cri.go:89] found id: ""
	I0318 14:23:37.505871 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.505879 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:37.505885 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:37.505951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:37.545983 1129259 cri.go:89] found id: ""
	I0318 14:23:37.546016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.546029 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:37.546037 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:37.546110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:37.585433 1129259 cri.go:89] found id: ""
	I0318 14:23:37.585466 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.585477 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:37.585486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:37.585561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:37.622978 1129259 cri.go:89] found id: ""
	I0318 14:23:37.623016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.623027 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:37.623034 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:37.623110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:37.675689 1129259 cri.go:89] found id: ""
	I0318 14:23:37.675721 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.675732 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:37.675743 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:37.675763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:37.785788 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.785820 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:37.785839 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:37.870218 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:37.870261 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:37.918199 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:37.918236 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:37.975082 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:37.975135 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:40.491216 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:40.507123 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:40.507189 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:40.548763 1129259 cri.go:89] found id: ""
	I0318 14:23:40.548796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.548806 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:40.548812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:40.548865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:40.589821 1129259 cri.go:89] found id: ""
	I0318 14:23:40.589859 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.589872 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:40.589879 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:40.589961 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:40.629571 1129259 cri.go:89] found id: ""
	I0318 14:23:40.629603 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.629615 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:40.629622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:40.629698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:40.668648 1129259 cri.go:89] found id: ""
	I0318 14:23:40.668682 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.668692 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:40.668719 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:40.668789 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:40.712948 1129259 cri.go:89] found id: ""
	I0318 14:23:40.713005 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.713018 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:40.713027 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:40.713103 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:40.763269 1129259 cri.go:89] found id: ""
	I0318 14:23:40.763298 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.763307 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:40.763313 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:40.763366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:40.809737 1129259 cri.go:89] found id: ""
	I0318 14:23:40.809776 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.809789 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:40.809798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:40.809873 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:40.849882 1129259 cri.go:89] found id: ""
	I0318 14:23:40.849921 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.849931 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:40.849941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:40.849961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:40.931042 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:40.931084 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:40.973246 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:40.973280 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:41.028835 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:41.028880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:41.044250 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:41.044293 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:41.116937 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.199672 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.698826 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.810759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.812721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.800736 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.617773 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:43.635147 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:43.635216 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:43.683392 1129259 cri.go:89] found id: ""
	I0318 14:23:43.683430 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.683446 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:43.683455 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:43.683521 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:43.729761 1129259 cri.go:89] found id: ""
	I0318 14:23:43.729801 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.729813 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:43.729820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:43.729888 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:43.790694 1129259 cri.go:89] found id: ""
	I0318 14:23:43.790728 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.790741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:43.790748 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:43.790819 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:43.838506 1129259 cri.go:89] found id: ""
	I0318 14:23:43.838537 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.838548 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:43.838557 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:43.838625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:43.879695 1129259 cri.go:89] found id: ""
	I0318 14:23:43.879725 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.879735 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:43.879743 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:43.879806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:43.919206 1129259 cri.go:89] found id: ""
	I0318 14:23:43.919238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.919250 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:43.919258 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:43.919333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:43.966266 1129259 cri.go:89] found id: ""
	I0318 14:23:43.966308 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.966321 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:43.966329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:43.966399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:44.006272 1129259 cri.go:89] found id: ""
	I0318 14:23:44.006310 1129259 logs.go:276] 0 containers: []
	W0318 14:23:44.006324 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:44.006339 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:44.006358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:44.063345 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:44.063395 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:44.079323 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:44.079365 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:44.158132 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:44.158157 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:44.158177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:44.244657 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:44.244707 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:41.707557 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.199509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.311703 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.811077 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.301804 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.800280 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.801802 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.791776 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:46.807457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:46.807547 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:46.849964 1129259 cri.go:89] found id: ""
	I0318 14:23:46.850003 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.850017 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:46.850025 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:46.850084 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:46.893174 1129259 cri.go:89] found id: ""
	I0318 14:23:46.893214 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.893227 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:46.893235 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:46.893314 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:46.933932 1129259 cri.go:89] found id: ""
	I0318 14:23:46.933969 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.933981 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:46.933998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:46.934075 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:46.973034 1129259 cri.go:89] found id: ""
	I0318 14:23:46.973073 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.973085 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:46.973093 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:46.973165 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:47.013465 1129259 cri.go:89] found id: ""
	I0318 14:23:47.013502 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.013515 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:47.013523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:47.013595 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:47.050526 1129259 cri.go:89] found id: ""
	I0318 14:23:47.050556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.050569 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:47.050583 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:47.050651 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:47.090395 1129259 cri.go:89] found id: ""
	I0318 14:23:47.090435 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.090448 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:47.090456 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:47.090533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:47.132761 1129259 cri.go:89] found id: ""
	I0318 14:23:47.132790 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.132799 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:47.132809 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:47.132822 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:47.179035 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:47.179073 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:47.231641 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:47.231687 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:47.248134 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:47.248171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:47.330265 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:47.330294 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:47.330311 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:49.912288 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:49.927753 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:49.927842 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:49.968306 1129259 cri.go:89] found id: ""
	I0318 14:23:49.968338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:49.968348 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:49.968354 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:49.968424 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:50.009781 1129259 cri.go:89] found id: ""
	I0318 14:23:50.009813 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.009821 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:50.009828 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:50.009892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:50.049203 1129259 cri.go:89] found id: ""
	I0318 14:23:50.049238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.049249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:50.049257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:50.049323 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:50.089679 1129259 cri.go:89] found id: ""
	I0318 14:23:50.089709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.089719 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:50.089725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:50.089790 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:50.132352 1129259 cri.go:89] found id: ""
	I0318 14:23:50.132384 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.132395 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:50.132404 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:50.132474 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:50.169043 1129259 cri.go:89] found id: ""
	I0318 14:23:50.169076 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.169089 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:50.169098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:50.169166 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:50.207753 1129259 cri.go:89] found id: ""
	I0318 14:23:50.207793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.207805 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:50.207813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:50.207898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:50.247048 1129259 cri.go:89] found id: ""
	I0318 14:23:50.247082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.247093 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:50.247103 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:50.247114 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:50.299768 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:50.299816 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:50.317627 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:50.317674 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:50.393122 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:50.393152 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:50.393170 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:50.480828 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:50.480880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:46.698786 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:49.198083 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:51.198509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.812029 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.311681 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.300917 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.301653 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.030467 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.044538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:53.044615 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:53.082312 1129259 cri.go:89] found id: ""
	I0318 14:23:53.082351 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.082361 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:53.082370 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:53.082431 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:53.127597 1129259 cri.go:89] found id: ""
	I0318 14:23:53.127631 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.127640 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:53.127645 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:53.127708 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:53.172152 1129259 cri.go:89] found id: ""
	I0318 14:23:53.172189 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.172203 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:53.172212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:53.172295 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:53.210210 1129259 cri.go:89] found id: ""
	I0318 14:23:53.210268 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.210281 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:53.210289 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:53.210356 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:53.248963 1129259 cri.go:89] found id: ""
	I0318 14:23:53.248995 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.249004 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:53.249010 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:53.249065 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:53.287853 1129259 cri.go:89] found id: ""
	I0318 14:23:53.287886 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.287896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:53.287903 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:53.287956 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:53.326858 1129259 cri.go:89] found id: ""
	I0318 14:23:53.326895 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.326908 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:53.326917 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:53.326987 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:53.369347 1129259 cri.go:89] found id: ""
	I0318 14:23:53.369381 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.369394 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:53.369407 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:53.369424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:53.420342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:53.420387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:53.436718 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:53.436750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:53.517954 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:53.518018 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:53.518036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:53.597726 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:53.597782 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:56.144313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.699341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.699481 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.810495 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.810917 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:59.812265 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.800712 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.300089 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:56.159569 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:56.159663 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:56.198525 1129259 cri.go:89] found id: ""
	I0318 14:23:56.198563 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.198575 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:56.198584 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:56.198662 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:56.242877 1129259 cri.go:89] found id: ""
	I0318 14:23:56.242913 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.242927 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:56.242942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:56.243018 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:56.282499 1129259 cri.go:89] found id: ""
	I0318 14:23:56.282531 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.282541 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:56.282547 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:56.282618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:56.321765 1129259 cri.go:89] found id: ""
	I0318 14:23:56.321810 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.321825 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:56.321833 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:56.321904 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:56.364005 1129259 cri.go:89] found id: ""
	I0318 14:23:56.364042 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.364054 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:56.364064 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:56.364138 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:56.402312 1129259 cri.go:89] found id: ""
	I0318 14:23:56.402339 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.402350 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:56.402356 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:56.402419 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:56.445638 1129259 cri.go:89] found id: ""
	I0318 14:23:56.445674 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.445686 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:56.445694 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:56.445760 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:56.488833 1129259 cri.go:89] found id: ""
	I0318 14:23:56.488870 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.488883 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:56.488896 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:56.488915 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:56.540862 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:56.540907 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:56.557124 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:56.557171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:56.634679 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:56.634711 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:56.634727 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:56.716419 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:56.716464 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.263125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:59.277619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:59.277703 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:59.318616 1129259 cri.go:89] found id: ""
	I0318 14:23:59.318648 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.318661 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:59.318668 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:59.318740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:59.358540 1129259 cri.go:89] found id: ""
	I0318 14:23:59.358577 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.358589 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:59.358597 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:59.358670 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:59.399046 1129259 cri.go:89] found id: ""
	I0318 14:23:59.399082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.399093 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:59.399099 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:59.399169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:59.439165 1129259 cri.go:89] found id: ""
	I0318 14:23:59.439223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.439236 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:59.439245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:59.439312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:59.476719 1129259 cri.go:89] found id: ""
	I0318 14:23:59.476755 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.476767 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:59.476775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:59.476833 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:59.515847 1129259 cri.go:89] found id: ""
	I0318 14:23:59.515878 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.515888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:59.515895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:59.515966 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:59.560831 1129259 cri.go:89] found id: ""
	I0318 14:23:59.560861 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.560871 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:59.560877 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:59.560939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:59.601176 1129259 cri.go:89] found id: ""
	I0318 14:23:59.601209 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.601219 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:59.601237 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:59.601253 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:59.616829 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:59.616862 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:59.695270 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:59.695300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:59.695316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:59.773564 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:59.773610 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.819326 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:59.819364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:58.198656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.699394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.311601 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.311669 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.300584 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.300628 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.372331 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:02.388245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:02.388333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:02.425594 1129259 cri.go:89] found id: ""
	I0318 14:24:02.425639 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.425655 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:02.425664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:02.425740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:02.467755 1129259 cri.go:89] found id: ""
	I0318 14:24:02.467786 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.467794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:02.467800 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:02.467890 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:02.510004 1129259 cri.go:89] found id: ""
	I0318 14:24:02.510035 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.510045 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:02.510051 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:02.510104 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:02.555590 1129259 cri.go:89] found id: ""
	I0318 14:24:02.555623 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.555632 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:02.555638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:02.555693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:02.595096 1129259 cri.go:89] found id: ""
	I0318 14:24:02.595125 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.595135 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:02.595141 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:02.595214 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:02.639452 1129259 cri.go:89] found id: ""
	I0318 14:24:02.639482 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.639491 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:02.639498 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:02.639563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:02.677653 1129259 cri.go:89] found id: ""
	I0318 14:24:02.677684 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.677700 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:02.677706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:02.677765 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:02.714853 1129259 cri.go:89] found id: ""
	I0318 14:24:02.714885 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.714898 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:02.714909 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:02.714923 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:02.767697 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:02.767742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:02.782786 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:02.782844 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:02.868981 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:02.869020 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:02.869037 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:02.944382 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:02.944421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
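The cycle above (process 1129259) is minikube waiting for a control plane to come up, apparently the old-k8s-version cluster judging by the v1.20.0 kubectl path: it looks for a kube-apiserver process, then asks crictl whether any container exists for each expected component, and every query returns an empty ID list. A minimal stand-alone sketch of that probe, assuming a host where crictl is installed; the component names and crictl flags are taken from the log, the surrounding loop is illustrative:

    # illustrative probe loop; names and flags copied from the log lines above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        if [ -z "$ids" ]; then
            echo "No container was found matching \"$name\""
        fi
    done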
	I0318 14:24:05.491779 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:05.507129 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:05.507213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:05.548809 1129259 cri.go:89] found id: ""
	I0318 14:24:05.548845 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.548858 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:05.548866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:05.548941 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:05.588005 1129259 cri.go:89] found id: ""
	I0318 14:24:05.588040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.588050 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:05.588056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:05.588108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:05.627670 1129259 cri.go:89] found id: ""
	I0318 14:24:05.627707 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.627720 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:05.627728 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:05.627814 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:05.666900 1129259 cri.go:89] found id: ""
	I0318 14:24:05.666936 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.666948 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:05.666957 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:05.667029 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:05.705796 1129259 cri.go:89] found id: ""
	I0318 14:24:05.705831 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.705844 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:05.705852 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:05.705923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:05.749842 1129259 cri.go:89] found id: ""
	I0318 14:24:05.749875 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.749888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:05.749896 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:05.749981 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:05.790843 1129259 cri.go:89] found id: ""
	I0318 14:24:05.790881 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.790896 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:05.790905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:05.790992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:05.832347 1129259 cri.go:89] found id: ""
	I0318 14:24:05.832383 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.832395 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:05.832408 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:05.832424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.874185 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:05.874219 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:05.929482 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:05.929534 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:05.945151 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:05.945187 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:06.024617 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:06.024644 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:06.024663 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:03.198564 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:05.198935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.811819 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.812462 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.300681 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.300912 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.799297 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
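The interleaved pod_ready lines come from three other start/stop runs (processes 1128788, 1128583 and 1128964), each polling a metrics-server pod whose Ready condition stays False. minikube performs this check through its Go client (pod_ready.go); an equivalent one-off check with kubectl, shown purely as an illustration with a pod name copied from the log and an illustrative jsonpath query, would be:

    # hypothetical manual readiness check, not part of the test run
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-6pn6n \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'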
	I0318 14:24:08.607030 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:08.622039 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:08.622140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:08.661599 1129259 cri.go:89] found id: ""
	I0318 14:24:08.661638 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.661647 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:08.661654 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:08.661728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:08.699890 1129259 cri.go:89] found id: ""
	I0318 14:24:08.699920 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.699931 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:08.699940 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:08.700009 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:08.745504 1129259 cri.go:89] found id: ""
	I0318 14:24:08.745541 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.745554 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:08.745562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:08.745624 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:08.784162 1129259 cri.go:89] found id: ""
	I0318 14:24:08.784204 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.784217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:08.784226 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:08.784302 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:08.824197 1129259 cri.go:89] found id: ""
	I0318 14:24:08.824227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.824236 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:08.824242 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:08.824301 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:08.865096 1129259 cri.go:89] found id: ""
	I0318 14:24:08.865128 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.865137 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:08.865146 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:08.865207 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:08.905337 1129259 cri.go:89] found id: ""
	I0318 14:24:08.905371 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.905385 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:08.905393 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:08.905477 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:08.945837 1129259 cri.go:89] found id: ""
	I0318 14:24:08.945880 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.945894 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:08.945906 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:08.945925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:09.023425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:09.023454 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:09.023473 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:09.107945 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:09.107989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:09.149742 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:09.149804 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:09.202813 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:09.202856 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
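Each "Gathering logs for ..." step runs one of the shell commands visible above: the last 400 journald lines for the kubelet and crio units, warning-and-above dmesg entries, the node's container status, and a "describe nodes" attempt. Grouped together for reference (commands copied verbatim from the log; only the grouping is added):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a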
	I0318 14:24:07.699433 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.198062 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.311072 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:13.311533 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:15.313064 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:12.799619 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.800637 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.720686 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:11.735125 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:11.735218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:11.772164 1129259 cri.go:89] found id: ""
	I0318 14:24:11.772198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.772210 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:11.772218 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:11.772285 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:11.811279 1129259 cri.go:89] found id: ""
	I0318 14:24:11.811309 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.811326 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:11.811334 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:11.811402 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:11.855011 1129259 cri.go:89] found id: ""
	I0318 14:24:11.855052 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.855065 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:11.855073 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:11.855146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:11.893168 1129259 cri.go:89] found id: ""
	I0318 14:24:11.893198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.893206 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:11.893212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:11.893273 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:11.930545 1129259 cri.go:89] found id: ""
	I0318 14:24:11.930583 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.930598 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:11.930608 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:11.930680 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:11.974014 1129259 cri.go:89] found id: ""
	I0318 14:24:11.974040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.974049 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:11.974063 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:11.974147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:12.025218 1129259 cri.go:89] found id: ""
	I0318 14:24:12.025247 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.025257 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:12.025263 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:12.025340 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:12.068361 1129259 cri.go:89] found id: ""
	I0318 14:24:12.068393 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.068406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:12.068425 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:12.068444 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:12.122840 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:12.122892 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:12.138841 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:12.138877 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:12.219567 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:12.219588 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:12.219602 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:12.307322 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:12.307368 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:14.855576 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:14.870076 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:14.870160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:14.910346 1129259 cri.go:89] found id: ""
	I0318 14:24:14.910387 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.910399 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:14.910407 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:14.910479 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:14.957120 1129259 cri.go:89] found id: ""
	I0318 14:24:14.957151 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.957165 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:14.957170 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:14.957238 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:14.998329 1129259 cri.go:89] found id: ""
	I0318 14:24:14.998360 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.998372 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:14.998381 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:14.998450 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:15.036994 1129259 cri.go:89] found id: ""
	I0318 14:24:15.037025 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.037034 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:15.037040 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:15.037095 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:15.075241 1129259 cri.go:89] found id: ""
	I0318 14:24:15.075272 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.075282 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:15.075288 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:15.075368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:15.114149 1129259 cri.go:89] found id: ""
	I0318 14:24:15.114199 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.114208 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:15.114215 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:15.114296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:15.155710 1129259 cri.go:89] found id: ""
	I0318 14:24:15.155745 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.155755 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:15.155762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:15.155847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:15.196863 1129259 cri.go:89] found id: ""
	I0318 14:24:15.196899 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.196910 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:15.196928 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:15.196946 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:15.253103 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:15.253147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:15.268783 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:15.268829 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:15.352694 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:15.352723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:15.352743 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:15.435023 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:15.435068 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:12.201234 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.698988 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.811663 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.812068 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:16.801294 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.301959 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.978170 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.994862 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:17.994929 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:18.036067 1129259 cri.go:89] found id: ""
	I0318 14:24:18.036103 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.036112 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:18.036119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:18.036186 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:18.081249 1129259 cri.go:89] found id: ""
	I0318 14:24:18.081280 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.081291 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:18.081297 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:18.081352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:18.122336 1129259 cri.go:89] found id: ""
	I0318 14:24:18.122367 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.122376 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:18.122382 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:18.122441 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:18.163897 1129259 cri.go:89] found id: ""
	I0318 14:24:18.163931 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.163940 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:18.163949 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:18.164012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:18.206744 1129259 cri.go:89] found id: ""
	I0318 14:24:18.206781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.206792 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:18.206798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:18.206881 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:18.245738 1129259 cri.go:89] found id: ""
	I0318 14:24:18.245767 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.245778 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:18.245786 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:18.245851 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:18.285181 1129259 cri.go:89] found id: ""
	I0318 14:24:18.285211 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.285221 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:18.285228 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:18.285282 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:18.328130 1129259 cri.go:89] found id: ""
	I0318 14:24:18.328162 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.328174 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:18.328193 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:18.328210 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:18.410346 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:18.410387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:18.467118 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:18.467154 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:18.530635 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:18.530704 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:18.549898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:18.549952 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:18.646134 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
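The recurring "failed describe nodes" block is the only gathering step that errors: the bundled v1.20.0 kubectl cannot reach an apiserver on localhost:8443, which is consistent with the empty crictl listings above (no kube-apiserver container is running yet). Reproduced as a single command using the paths shown in the log, with an illustrative fallback message:

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      || echo "connection refused: no kube-apiserver listening on localhost:8443 yet"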
	I0318 14:24:21.146368 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.199048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.200040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:22.312401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.812678 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.799684 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.301211 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.162077 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:21.162156 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:21.200211 1129259 cri.go:89] found id: ""
	I0318 14:24:21.200242 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.200251 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:21.200257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:21.200329 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:21.241228 1129259 cri.go:89] found id: ""
	I0318 14:24:21.241265 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.241277 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:21.241284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:21.241359 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:21.278110 1129259 cri.go:89] found id: ""
	I0318 14:24:21.278147 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.278159 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:21.278167 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:21.278240 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:21.317067 1129259 cri.go:89] found id: ""
	I0318 14:24:21.317104 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.317115 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:21.317124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:21.317201 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:21.356217 1129259 cri.go:89] found id: ""
	I0318 14:24:21.356251 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.356260 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:21.356267 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:21.356326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:21.394990 1129259 cri.go:89] found id: ""
	I0318 14:24:21.395031 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.395047 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:21.395056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:21.395136 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:21.435880 1129259 cri.go:89] found id: ""
	I0318 14:24:21.435913 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.435928 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:21.435937 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:21.436023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:21.477754 1129259 cri.go:89] found id: ""
	I0318 14:24:21.477801 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.477814 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:21.477826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:21.477851 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:21.493178 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:21.493220 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:21.570200 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.570239 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:21.570257 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:21.658100 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:21.658147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.703286 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:21.703327 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.266730 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:24.285544 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:24.285655 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:24.338183 1129259 cri.go:89] found id: ""
	I0318 14:24:24.338234 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.338248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:24.338256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:24.338326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:24.407496 1129259 cri.go:89] found id: ""
	I0318 14:24:24.407529 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.407543 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:24.407551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:24.407618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:24.457689 1129259 cri.go:89] found id: ""
	I0318 14:24:24.457728 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.457741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:24.457749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:24.457831 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:24.498685 1129259 cri.go:89] found id: ""
	I0318 14:24:24.498709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.498718 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:24.498725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:24.498783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:24.537966 1129259 cri.go:89] found id: ""
	I0318 14:24:24.537999 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.538009 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:24.538016 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:24.538070 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:24.576493 1129259 cri.go:89] found id: ""
	I0318 14:24:24.576522 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.576532 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:24.576538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:24.576592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:24.613764 1129259 cri.go:89] found id: ""
	I0318 14:24:24.613799 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.613812 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:24.613820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:24.613893 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:24.655862 1129259 cri.go:89] found id: ""
	I0318 14:24:24.655892 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.655906 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:24.655919 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:24.655937 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.710557 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:24.710604 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:24.725755 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:24.725792 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:24.805585 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:24.805616 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:24.805633 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:24.889922 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:24.889989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.699674 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.199382 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.312672 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.315087 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:26.800594 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.299763 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.437998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:27.454560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:27.454664 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:27.493973 1129259 cri.go:89] found id: ""
	I0318 14:24:27.494003 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.494011 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:27.494019 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:27.494078 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:27.543071 1129259 cri.go:89] found id: ""
	I0318 14:24:27.543109 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.543122 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:27.543131 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:27.543211 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:27.586163 1129259 cri.go:89] found id: ""
	I0318 14:24:27.586196 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.586212 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:27.586220 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:27.586324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:27.625233 1129259 cri.go:89] found id: ""
	I0318 14:24:27.625271 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.625284 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:27.625293 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:27.625365 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:27.663729 1129259 cri.go:89] found id: ""
	I0318 14:24:27.663772 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.663782 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:27.663798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:27.663887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:27.702041 1129259 cri.go:89] found id: ""
	I0318 14:24:27.702072 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.702082 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:27.702090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:27.702158 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:27.745186 1129259 cri.go:89] found id: ""
	I0318 14:24:27.745216 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.745226 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:27.745233 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:27.745296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:27.786673 1129259 cri.go:89] found id: ""
	I0318 14:24:27.786709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.786719 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:27.786729 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:27.786742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:27.842472 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:27.842531 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:27.856985 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:27.857016 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:27.935445 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:27.935478 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:27.935496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:28.024737 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:28.024795 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:30.571003 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:30.585617 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:30.585714 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:30.628461 1129259 cri.go:89] found id: ""
	I0318 14:24:30.628488 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.628497 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:30.628503 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:30.628566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:30.674555 1129259 cri.go:89] found id: ""
	I0318 14:24:30.674595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.674610 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:30.674618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:30.674695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:30.714899 1129259 cri.go:89] found id: ""
	I0318 14:24:30.714950 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.714961 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:30.714970 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:30.715039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:30.756263 1129259 cri.go:89] found id: ""
	I0318 14:24:30.756295 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.756305 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:30.756311 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:30.756366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:30.795213 1129259 cri.go:89] found id: ""
	I0318 14:24:30.795244 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.795258 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:30.795265 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:30.795336 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:30.837198 1129259 cri.go:89] found id: ""
	I0318 14:24:30.837233 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.837242 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:30.837248 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:30.837306 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:30.875367 1129259 cri.go:89] found id: ""
	I0318 14:24:30.875404 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.875417 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:30.875427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:30.875510 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:30.918664 1129259 cri.go:89] found id: ""
	I0318 14:24:30.918701 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.918713 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:30.918727 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:30.918747 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:31.004325 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:31.004350 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:31.004367 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:31.093837 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:31.093882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:31.138285 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:31.138318 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:26.698769 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:28.700212 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.200571 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.811482 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.812980 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.299818 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.300656 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.798808 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.192059 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:31.192106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:33.708873 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:33.723861 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:33.723954 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:33.766843 1129259 cri.go:89] found id: ""
	I0318 14:24:33.766884 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.766899 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:33.766908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:33.766991 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:33.808273 1129259 cri.go:89] found id: ""
	I0318 14:24:33.808308 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.808319 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:33.808327 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:33.808401 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:33.847755 1129259 cri.go:89] found id: ""
	I0318 14:24:33.847789 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.847801 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:33.847823 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:33.847909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:33.888733 1129259 cri.go:89] found id: ""
	I0318 14:24:33.888785 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.888807 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:33.888817 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:33.888892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:33.927231 1129259 cri.go:89] found id: ""
	I0318 14:24:33.927281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.927294 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:33.927301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:33.927370 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:33.968573 1129259 cri.go:89] found id: ""
	I0318 14:24:33.968602 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.968612 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:33.968619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:33.968685 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:34.019265 1129259 cri.go:89] found id: ""
	I0318 14:24:34.019298 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.019314 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:34.019321 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:34.019392 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:34.059195 1129259 cri.go:89] found id: ""
	I0318 14:24:34.059226 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.059237 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:34.059251 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:34.059268 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:34.101211 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:34.101252 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:34.154985 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:34.155029 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:34.169762 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:34.169798 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:34.247258 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:34.247289 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:34.247304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:33.698578 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.698656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.814759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:38.311080 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:40.312503 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:37.800024 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.801292 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:36.829539 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:36.844908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:36.845003 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:36.883646 1129259 cri.go:89] found id: ""
	I0318 14:24:36.883673 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.883682 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:36.883688 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:36.883742 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:36.927651 1129259 cri.go:89] found id: ""
	I0318 14:24:36.927685 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.927700 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:36.927706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:36.927774 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:36.972206 1129259 cri.go:89] found id: ""
	I0318 14:24:36.972243 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.972256 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:36.972264 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:36.972337 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:37.011161 1129259 cri.go:89] found id: ""
	I0318 14:24:37.011203 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.011217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:37.011225 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:37.011293 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:37.050426 1129259 cri.go:89] found id: ""
	I0318 14:24:37.050456 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.050465 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:37.050472 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:37.050525 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:37.090240 1129259 cri.go:89] found id: ""
	I0318 14:24:37.090277 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.090288 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:37.090296 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:37.090371 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:37.138359 1129259 cri.go:89] found id: ""
	I0318 14:24:37.138392 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.138405 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:37.138414 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:37.138484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:37.175367 1129259 cri.go:89] found id: ""
	I0318 14:24:37.175397 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.175406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:37.175419 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:37.175438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.190633 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:37.190665 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:37.266426 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:37.266455 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:37.266474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:37.352005 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:37.352052 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:37.398004 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:37.398042 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:39.957926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:39.972906 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:39.972994 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:40.015482 1129259 cri.go:89] found id: ""
	I0318 14:24:40.015531 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.015543 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:40.015553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:40.015632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:40.057869 1129259 cri.go:89] found id: ""
	I0318 14:24:40.057901 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.057913 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:40.057921 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:40.057992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:40.099638 1129259 cri.go:89] found id: ""
	I0318 14:24:40.099666 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.099676 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:40.099683 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:40.099748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:40.137566 1129259 cri.go:89] found id: ""
	I0318 14:24:40.137607 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.137619 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:40.137629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:40.137698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:40.178781 1129259 cri.go:89] found id: ""
	I0318 14:24:40.178816 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.178828 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:40.178835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:40.178902 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:40.221065 1129259 cri.go:89] found id: ""
	I0318 14:24:40.221106 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.221118 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:40.221135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:40.221213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:40.262154 1129259 cri.go:89] found id: ""
	I0318 14:24:40.262193 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.262204 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:40.262212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:40.262288 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:40.302898 1129259 cri.go:89] found id: ""
	I0318 14:24:40.302932 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.302944 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:40.302957 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:40.302973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:40.384224 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:40.384248 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:40.384270 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:40.473257 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:40.473313 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:40.513518 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:40.513571 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:40.569342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:40.569393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.698736 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.699014 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.813028 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.814259 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.300121 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.802581 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:43.085260 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:43.100701 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:43.100773 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:43.141395 1129259 cri.go:89] found id: ""
	I0318 14:24:43.141441 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.141453 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:43.141462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:43.141531 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:43.185883 1129259 cri.go:89] found id: ""
	I0318 14:24:43.185918 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.185929 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:43.185938 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:43.186012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:43.225249 1129259 cri.go:89] found id: ""
	I0318 14:24:43.225281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.225292 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:43.225301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:43.225375 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:43.270433 1129259 cri.go:89] found id: ""
	I0318 14:24:43.270474 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.270484 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:43.270491 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:43.270557 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:43.312947 1129259 cri.go:89] found id: ""
	I0318 14:24:43.312975 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.312986 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:43.312994 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:43.313061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:43.352095 1129259 cri.go:89] found id: ""
	I0318 14:24:43.352130 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.352144 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:43.352153 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:43.352222 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:43.394789 1129259 cri.go:89] found id: ""
	I0318 14:24:43.394820 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.394833 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:43.394840 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:43.394913 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:43.440612 1129259 cri.go:89] found id: ""
	I0318 14:24:43.440646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.440655 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:43.440668 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:43.440686 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:43.497257 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:43.497304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:43.513680 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:43.513715 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:43.599437 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:43.599471 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:43.599490 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:43.681435 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:43.681480 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:42.198235 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.199088 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.312598 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.814542 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.300765 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.801469 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:46.227650 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:46.242656 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:46.242724 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:46.288400 1129259 cri.go:89] found id: ""
	I0318 14:24:46.288434 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.288448 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:46.288457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:46.288544 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:46.327648 1129259 cri.go:89] found id: ""
	I0318 14:24:46.327691 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.327704 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:46.327712 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:46.327785 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:46.370251 1129259 cri.go:89] found id: ""
	I0318 14:24:46.370292 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.370305 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:46.370322 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:46.370404 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:46.413589 1129259 cri.go:89] found id: ""
	I0318 14:24:46.413629 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.413639 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:46.413646 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:46.413712 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:46.453557 1129259 cri.go:89] found id: ""
	I0318 14:24:46.453593 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.453606 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:46.453615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:46.453696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:46.492502 1129259 cri.go:89] found id: ""
	I0318 14:24:46.492538 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.492552 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:46.492560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:46.492641 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:46.534614 1129259 cri.go:89] found id: ""
	I0318 14:24:46.534646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.534656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:46.534662 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:46.534722 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:46.576300 1129259 cri.go:89] found id: ""
	I0318 14:24:46.576331 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.576340 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:46.576351 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:46.576363 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.665281 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:46.665329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:46.712011 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:46.712050 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:46.799071 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:46.799128 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:46.814892 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:46.814921 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:46.893065 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.393340 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:49.407307 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:49.407388 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:49.449296 1129259 cri.go:89] found id: ""
	I0318 14:24:49.449330 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.449343 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:49.449351 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:49.449412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:49.489753 1129259 cri.go:89] found id: ""
	I0318 14:24:49.489781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.489790 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:49.489796 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:49.489865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:49.533692 1129259 cri.go:89] found id: ""
	I0318 14:24:49.533740 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.533756 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:49.533765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:49.533849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:49.580932 1129259 cri.go:89] found id: ""
	I0318 14:24:49.580980 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.580992 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:49.581001 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:49.581090 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:49.617642 1129259 cri.go:89] found id: ""
	I0318 14:24:49.617672 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.617684 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:49.617692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:49.617758 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:49.655313 1129259 cri.go:89] found id: ""
	I0318 14:24:49.655342 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.655351 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:49.655358 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:49.655412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:49.694613 1129259 cri.go:89] found id: ""
	I0318 14:24:49.694645 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.694656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:49.694665 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:49.694735 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:49.736954 1129259 cri.go:89] found id: ""
	I0318 14:24:49.737005 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.737017 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:49.737030 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:49.737051 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:49.779496 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:49.779540 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:49.836505 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:49.836549 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:49.853299 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:49.853329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:49.929231 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.929254 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:49.929269 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.699746 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.198789 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:51.199313 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.311753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.311952 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.301766 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.513104 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:52.534931 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:52.535032 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:52.578668 1129259 cri.go:89] found id: ""
	I0318 14:24:52.578706 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.578720 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:52.578731 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:52.578788 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:52.616799 1129259 cri.go:89] found id: ""
	I0318 14:24:52.616829 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.616838 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:52.616845 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:52.616909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:52.659502 1129259 cri.go:89] found id: ""
	I0318 14:24:52.659595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.659616 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:52.659627 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:52.659696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:52.704402 1129259 cri.go:89] found id: ""
	I0318 14:24:52.704431 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.704439 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:52.704446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:52.704524 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:52.748018 1129259 cri.go:89] found id: ""
	I0318 14:24:52.748043 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.748052 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:52.748059 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:52.748128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:52.786901 1129259 cri.go:89] found id: ""
	I0318 14:24:52.786942 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.786956 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:52.786966 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:52.787040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:52.828259 1129259 cri.go:89] found id: ""
	I0318 14:24:52.828288 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.828298 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:52.828304 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:52.828360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:52.867439 1129259 cri.go:89] found id: ""
	I0318 14:24:52.867470 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.867482 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:52.867495 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:52.867513 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:52.920709 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:52.920755 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:52.936596 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:52.936631 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:53.012271 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:53.012300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:53.012315 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.092318 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:53.092358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:55.642662 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:55.656650 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:55.656725 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:55.700050 1129259 cri.go:89] found id: ""
	I0318 14:24:55.700085 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.700099 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:55.700109 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:55.700183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:55.742561 1129259 cri.go:89] found id: ""
	I0318 14:24:55.742599 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.742608 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:55.742614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:55.742668 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:55.780395 1129259 cri.go:89] found id: ""
	I0318 14:24:55.780427 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.780435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:55.780442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:55.780505 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:55.819259 1129259 cri.go:89] found id: ""
	I0318 14:24:55.819291 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.819301 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:55.819310 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:55.819366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:55.859189 1129259 cri.go:89] found id: ""
	I0318 14:24:55.859227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.859240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:55.859249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:55.859322 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:55.900012 1129259 cri.go:89] found id: ""
	I0318 14:24:55.900050 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.900062 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:55.900070 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:55.900146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:55.936548 1129259 cri.go:89] found id: ""
	I0318 14:24:55.936578 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.936587 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:55.936595 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:55.936661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:55.977201 1129259 cri.go:89] found id: ""
	I0318 14:24:55.977241 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.977254 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:55.977266 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:55.977281 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:56.030548 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:56.030603 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:56.047923 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:56.047959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:56.129425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:56.129457 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:56.129474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.199935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:55.699461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.811981 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.814200 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.799464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.800623 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.224109 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:56.224173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.771513 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:58.786323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:58.786416 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:58.832801 1129259 cri.go:89] found id: ""
	I0318 14:24:58.832843 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.832856 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:58.832868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:58.832945 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:58.873757 1129259 cri.go:89] found id: ""
	I0318 14:24:58.873792 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.873802 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:58.873811 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:58.873875 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:58.920727 1129259 cri.go:89] found id: ""
	I0318 14:24:58.920759 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.920769 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:58.920775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:58.920841 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:58.975483 1129259 cri.go:89] found id: ""
	I0318 14:24:58.975524 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.975538 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:58.975549 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:58.975627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:59.027055 1129259 cri.go:89] found id: ""
	I0318 14:24:59.027092 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.027104 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:59.027113 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:59.027195 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:59.073394 1129259 cri.go:89] found id: ""
	I0318 14:24:59.073435 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.073457 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:59.073466 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:59.073536 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:59.114945 1129259 cri.go:89] found id: ""
	I0318 14:24:59.114982 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.114991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:59.114998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:59.115056 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:59.155496 1129259 cri.go:89] found id: ""
	I0318 14:24:59.155533 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.155545 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:59.155558 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:59.155574 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:59.214435 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:59.214476 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:59.230733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:59.230780 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:59.308976 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:59.309007 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:59.309024 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:59.396237 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:59.396287 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.198049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:00.199613 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.312698 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.811687 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.299462 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.300239 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:05.301621 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.941736 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:01.955973 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:01.956058 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:01.995149 1129259 cri.go:89] found id: ""
	I0318 14:25:01.995187 1129259 logs.go:276] 0 containers: []
	W0318 14:25:01.995208 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:01.995217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:01.995287 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:02.036739 1129259 cri.go:89] found id: ""
	I0318 14:25:02.036780 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.036794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:02.036804 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:02.036880 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:02.074909 1129259 cri.go:89] found id: ""
	I0318 14:25:02.074937 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.074947 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:02.074954 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:02.075039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:02.112164 1129259 cri.go:89] found id: ""
	I0318 14:25:02.112203 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.112215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:02.112223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:02.112281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:02.150756 1129259 cri.go:89] found id: ""
	I0318 14:25:02.150795 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.150808 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:02.150816 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:02.150885 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:02.194475 1129259 cri.go:89] found id: ""
	I0318 14:25:02.194511 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.194522 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:02.194531 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:02.194603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:02.237472 1129259 cri.go:89] found id: ""
	I0318 14:25:02.237499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.237508 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:02.237514 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:02.237582 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:02.278094 1129259 cri.go:89] found id: ""
	I0318 14:25:02.278136 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.278157 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:02.278171 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:02.278190 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:02.366946 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:02.367004 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.412234 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:02.412267 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:02.470036 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:02.470109 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:02.487051 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:02.487085 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:02.574515 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.074768 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:05.090386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:05.090466 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:05.131144 1129259 cri.go:89] found id: ""
	I0318 14:25:05.131180 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.131190 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:05.131198 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:05.131254 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:05.171613 1129259 cri.go:89] found id: ""
	I0318 14:25:05.171653 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.171668 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:05.171676 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:05.171748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:05.219256 1129259 cri.go:89] found id: ""
	I0318 14:25:05.219296 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.219310 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:05.219320 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:05.219410 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:05.258580 1129259 cri.go:89] found id: ""
	I0318 14:25:05.258615 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.258625 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:05.258633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:05.258688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:05.297198 1129259 cri.go:89] found id: ""
	I0318 14:25:05.297230 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.297240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:05.297249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:05.297319 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:05.341148 1129259 cri.go:89] found id: ""
	I0318 14:25:05.341184 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.341196 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:05.341205 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:05.341274 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:05.382094 1129259 cri.go:89] found id: ""
	I0318 14:25:05.382121 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.382129 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:05.382135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:05.382199 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:05.422027 1129259 cri.go:89] found id: ""
	I0318 14:25:05.422074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.422083 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:05.422092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:05.422106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:05.474193 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:05.474238 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:05.490325 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:05.490364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:05.566999 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.567029 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:05.567048 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:05.647205 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:05.647247 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
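The cycle above repeats throughout this log: minikube SSHes into the VM and asks crictl for any container, running or exited, matching each control-plane component, and every query comes back empty, hence the "No container was found matching ..." warnings that follow each one. A minimal Go sketch of the same check is below; it only assumes crictl is installed on the host where it runs, and the helper name listContainerIDs is illustrative, not minikube's cri.go API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the check in the log: ask crictl for all
// containers (running or exited) whose name matches the given component
// and return their IDs. An empty slice means the component has no
// container at all, which is what every query above reports.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", c, len(ids))
	}
}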
	I0318 14:25:02.200341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:04.698040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:06.312239 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.811427 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:07.800597 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:10.300964 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.192390 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:08.207905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:08.207992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:08.247221 1129259 cri.go:89] found id: ""
	I0318 14:25:08.247257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.247269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:08.247278 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:08.247347 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:08.289460 1129259 cri.go:89] found id: ""
	I0318 14:25:08.289496 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.289509 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:08.289516 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:08.289601 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:08.330232 1129259 cri.go:89] found id: ""
	I0318 14:25:08.330273 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.330286 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:08.330294 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:08.330366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:08.368035 1129259 cri.go:89] found id: ""
	I0318 14:25:08.368074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.368086 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:08.368094 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:08.368170 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:08.413598 1129259 cri.go:89] found id: ""
	I0318 14:25:08.413631 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.413641 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:08.413647 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:08.413745 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:08.451706 1129259 cri.go:89] found id: ""
	I0318 14:25:08.451742 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.451754 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:08.451762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:08.451856 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:08.491037 1129259 cri.go:89] found id: ""
	I0318 14:25:08.491075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.491088 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:08.491096 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:08.491175 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:08.529376 1129259 cri.go:89] found id: ""
	I0318 14:25:08.529412 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.529423 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:08.529435 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:08.529453 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:08.586539 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:08.586580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:08.602197 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:08.602226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:08.678158 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:08.678186 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:08.678202 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:08.764272 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:08.764326 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:06.700315 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:09.198241 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.198296 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.312458 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:13.312602 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:12.799474 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:14.800216 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.307681 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:11.322482 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:11.322565 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:11.361333 1129259 cri.go:89] found id: ""
	I0318 14:25:11.361366 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.361378 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:11.361386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:11.361457 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:11.399404 1129259 cri.go:89] found id: ""
	I0318 14:25:11.399444 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.399468 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:11.399486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:11.399556 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:11.438279 1129259 cri.go:89] found id: ""
	I0318 14:25:11.438324 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.438338 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:11.438350 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:11.438426 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:11.474991 1129259 cri.go:89] found id: ""
	I0318 14:25:11.475039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.475050 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:11.475058 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:11.475128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:11.511152 1129259 cri.go:89] found id: ""
	I0318 14:25:11.511185 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.511195 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:11.511204 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:11.511271 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:11.549752 1129259 cri.go:89] found id: ""
	I0318 14:25:11.549794 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.549806 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:11.549814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:11.549886 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:11.587089 1129259 cri.go:89] found id: ""
	I0318 14:25:11.587117 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.587135 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:11.587152 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:11.587205 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:11.621515 1129259 cri.go:89] found id: ""
	I0318 14:25:11.621547 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.621559 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:11.621574 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:11.621592 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:11.680905 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:11.680948 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:11.696472 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:11.696508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:11.772013 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:11.772035 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:11.772054 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:11.855131 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:11.855182 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:14.396034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:14.410601 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:14.410677 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:14.449351 1129259 cri.go:89] found id: ""
	I0318 14:25:14.449392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.449404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:14.449413 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:14.449484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:14.488011 1129259 cri.go:89] found id: ""
	I0318 14:25:14.488039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.488049 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:14.488055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:14.488115 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:14.529089 1129259 cri.go:89] found id: ""
	I0318 14:25:14.529128 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.529141 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:14.529148 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:14.529219 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:14.567919 1129259 cri.go:89] found id: ""
	I0318 14:25:14.567952 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.567962 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:14.567975 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:14.568039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:14.604744 1129259 cri.go:89] found id: ""
	I0318 14:25:14.604785 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.604798 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:14.604806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:14.604872 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:14.643367 1129259 cri.go:89] found id: ""
	I0318 14:25:14.643396 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.643405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:14.643411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:14.643473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:14.680584 1129259 cri.go:89] found id: ""
	I0318 14:25:14.680623 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.680639 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:14.680652 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:14.680726 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:14.720040 1129259 cri.go:89] found id: ""
	I0318 14:25:14.720070 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.720080 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:14.720092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:14.720106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:14.773483 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:14.773525 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:14.788628 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:14.788664 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:14.862912 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:14.862941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:14.862959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:14.945001 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:14.945047 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
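Every "describe nodes" attempt in this log fails the same way: kubectl cannot reach localhost:8443. That is consistent with the empty crictl listings above, since there is no kube-apiserver container to listen on that port. A small hedged Go probe like the following (a hypothetical helper, not part of minikube) can confirm that nothing is accepting connections there, which separates "apiserver absent" from kubeconfig or DNS problems:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe reports whether anything is accepting TCP connections on the
// apiserver's default secure port. A "connection refused" error here
// matches the kubectl error in the log and points at the apiserver
// process/container being absent.
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probe("127.0.0.1:8443"); err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	fmt.Println("something is listening on 8443")
}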
	I0318 14:25:13.199314 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.199666 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.812120 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.813219 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.814195 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:16.800432 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.299589 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.491984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:17.505305 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:17.505373 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:17.548465 1129259 cri.go:89] found id: ""
	I0318 14:25:17.548493 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.548501 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:17.548508 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:17.548566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:17.590043 1129259 cri.go:89] found id: ""
	I0318 14:25:17.590075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.590084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:17.590090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:17.590147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:17.628014 1129259 cri.go:89] found id: ""
	I0318 14:25:17.628042 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.628051 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:17.628057 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:17.628108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:17.666781 1129259 cri.go:89] found id: ""
	I0318 14:25:17.666814 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.666826 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:17.666835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:17.666892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:17.705989 1129259 cri.go:89] found id: ""
	I0318 14:25:17.706028 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.706048 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:17.706056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:17.706134 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:17.743782 1129259 cri.go:89] found id: ""
	I0318 14:25:17.743815 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.743843 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:17.743853 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:17.743923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:17.787400 1129259 cri.go:89] found id: ""
	I0318 14:25:17.787431 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.787439 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:17.787446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:17.787509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:17.825236 1129259 cri.go:89] found id: ""
	I0318 14:25:17.825270 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.825279 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:17.825291 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:17.825309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:17.877845 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:17.877888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:17.893733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:17.893768 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:17.987782 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:17.987809 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:17.987845 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:18.077756 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:18.077802 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:20.625530 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:20.639692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:20.639783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:20.678892 1129259 cri.go:89] found id: ""
	I0318 14:25:20.678927 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.678939 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:20.678948 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:20.679020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:20.716077 1129259 cri.go:89] found id: ""
	I0318 14:25:20.716109 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.716119 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:20.716124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:20.716179 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:20.756708 1129259 cri.go:89] found id: ""
	I0318 14:25:20.756737 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.756748 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:20.756756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:20.756823 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:20.793692 1129259 cri.go:89] found id: ""
	I0318 14:25:20.793728 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.793740 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:20.793749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:20.793822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:20.834607 1129259 cri.go:89] found id: ""
	I0318 14:25:20.834638 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.834649 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:20.834657 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:20.834728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:20.872583 1129259 cri.go:89] found id: ""
	I0318 14:25:20.872616 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.872625 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:20.872632 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:20.872688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:20.906061 1129259 cri.go:89] found id: ""
	I0318 14:25:20.906099 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.906112 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:20.906120 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:20.906183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:20.942582 1129259 cri.go:89] found id: ""
	I0318 14:25:20.942612 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.942621 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:20.942632 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:20.942646 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:20.958461 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:20.958500 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:21.032841 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:21.032867 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:21.032896 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:21.110717 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:21.110764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:17.698783 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.698980 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.804733 1128788 pod_ready.go:81] duration metric: took 4m0.000568505s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:21.804764 1128788 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:21.804783 1128788 pod_ready.go:38] duration metric: took 4m13.068724908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:21.804834 1128788 kubeadm.go:591] duration metric: took 4m21.284795634s to restartPrimaryControlPlane
	W0318 14:25:21.804919 1128788 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:21.804954 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
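Interleaved with the log-gathering loop above, the other minikube processes are polling pod readiness; at 14:25:21 process 1128788 hits its 4m0s deadline for metrics-server, abandons restartPrimaryControlPlane, and falls back to kubeadm reset --force. The wait in pod_ready.go is essentially a poll-with-deadline; a minimal sketch of that pattern follows (waitFor, its parameters, and the stand-in check are illustrative, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check every interval until it returns true or the
// deadline passes, mirroring the repeated `has status "Ready":"False"`
// lines followed by "timed out waiting 4m0s" in the log.
func waitFor(timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Stand-in check that never becomes ready, like metrics-server above;
	// a short timeout is used here so the example finishes quickly.
	err := waitFor(6*time.Second, 2*time.Second, func() (bool, error) { return false, nil })
	fmt.Println(err)
}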
	I0318 14:25:21.300889 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:23.800547 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:25.803188 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.160015 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:21.160055 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:23.715103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:23.729231 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:23.729324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:23.779123 1129259 cri.go:89] found id: ""
	I0318 14:25:23.779157 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.779166 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:23.779172 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:23.779247 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:23.820353 1129259 cri.go:89] found id: ""
	I0318 14:25:23.820397 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.820410 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:23.820427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:23.820498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:23.857375 1129259 cri.go:89] found id: ""
	I0318 14:25:23.857405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.857416 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:23.857422 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:23.857490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:23.895114 1129259 cri.go:89] found id: ""
	I0318 14:25:23.895153 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.895165 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:23.895173 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:23.895239 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:23.939728 1129259 cri.go:89] found id: ""
	I0318 14:25:23.939764 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.939776 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:23.939784 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:23.939866 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:23.980585 1129259 cri.go:89] found id: ""
	I0318 14:25:23.980618 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.980631 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:23.980640 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:23.980711 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:24.019562 1129259 cri.go:89] found id: ""
	I0318 14:25:24.019596 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.019604 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:24.019611 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:24.019700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:24.069418 1129259 cri.go:89] found id: ""
	I0318 14:25:24.069455 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.069466 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:24.069478 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:24.069502 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:24.150859 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:24.150893 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:24.150913 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:24.258358 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:24.258408 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:24.304571 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:24.304609 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:24.366826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:24.366882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:21.699436 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:24.199193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:28.300495 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:30.300870 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:26.886056 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:26.904239 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:26.904315 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:26.950812 1129259 cri.go:89] found id: ""
	I0318 14:25:26.950847 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.950859 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:26.950866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:26.950957 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:26.999189 1129259 cri.go:89] found id: ""
	I0318 14:25:26.999224 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.999237 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:26.999246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:26.999312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:27.040452 1129259 cri.go:89] found id: ""
	I0318 14:25:27.040488 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.040499 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:27.040505 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:27.040586 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:27.078751 1129259 cri.go:89] found id: ""
	I0318 14:25:27.078782 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.078792 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:27.078798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:27.078865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:27.116428 1129259 cri.go:89] found id: ""
	I0318 14:25:27.116465 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.116477 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:27.116486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:27.116567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:27.152882 1129259 cri.go:89] found id: ""
	I0318 14:25:27.152922 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.152934 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:27.152942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:27.153023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:27.194470 1129259 cri.go:89] found id: ""
	I0318 14:25:27.194506 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.194518 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:27.194528 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:27.194599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:27.235910 1129259 cri.go:89] found id: ""
	I0318 14:25:27.235939 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.235948 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:27.235959 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:27.235973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:27.302132 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:27.302189 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:27.315806 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:27.315866 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:27.398210 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:27.398240 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:27.398255 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:27.479388 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:27.479432 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:30.026721 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:30.043060 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:30.043133 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:30.083373 1129259 cri.go:89] found id: ""
	I0318 14:25:30.083405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.083415 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:30.083423 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:30.083498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:30.121448 1129259 cri.go:89] found id: ""
	I0318 14:25:30.121485 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.121498 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:30.121506 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:30.121587 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:30.160527 1129259 cri.go:89] found id: ""
	I0318 14:25:30.160557 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.160566 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:30.160574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:30.160636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:30.199812 1129259 cri.go:89] found id: ""
	I0318 14:25:30.199870 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.199884 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:30.199895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:30.199970 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:30.242922 1129259 cri.go:89] found id: ""
	I0318 14:25:30.242959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.242971 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:30.242983 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:30.243053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:30.280918 1129259 cri.go:89] found id: ""
	I0318 14:25:30.280949 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.280962 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:30.280968 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:30.281021 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:30.319928 1129259 cri.go:89] found id: ""
	I0318 14:25:30.319959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.319968 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:30.319974 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:30.320040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:30.363693 1129259 cri.go:89] found id: ""
	I0318 14:25:30.363723 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.363733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:30.363744 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:30.363757 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:30.419559 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:30.419608 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:30.435030 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:30.435078 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:30.514849 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:30.514885 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:30.514903 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:30.601660 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:30.601711 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:26.700384 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:29.203012 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:32.800506 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:35.299464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.150817 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:33.165959 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:33.166045 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:33.205823 1129259 cri.go:89] found id: ""
	I0318 14:25:33.205862 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.205874 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:33.205884 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:33.205951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:33.267817 1129259 cri.go:89] found id: ""
	I0318 14:25:33.267865 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.267878 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:33.267886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:33.267977 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:33.309310 1129259 cri.go:89] found id: ""
	I0318 14:25:33.309338 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.309346 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:33.309353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:33.309417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:33.350169 1129259 cri.go:89] found id: ""
	I0318 14:25:33.350202 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.350215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:33.350223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:33.350289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:33.391919 1129259 cri.go:89] found id: ""
	I0318 14:25:33.391961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.391973 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:33.391981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:33.392049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:33.433001 1129259 cri.go:89] found id: ""
	I0318 14:25:33.433056 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.433069 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:33.433078 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:33.433150 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:33.474482 1129259 cri.go:89] found id: ""
	I0318 14:25:33.474513 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.474533 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:33.474542 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:33.474603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:33.512280 1129259 cri.go:89] found id: ""
	I0318 14:25:33.512314 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.512323 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:33.512333 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:33.512347 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:33.593336 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:33.593378 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:33.636001 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:33.636038 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:33.688881 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:33.688922 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:33.704549 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:33.704580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:33.779659 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:31.698372 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.699450 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.199443 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:37.299695 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:39.800741 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.280240 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:36.295566 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:36.295646 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:36.336195 1129259 cri.go:89] found id: ""
	I0318 14:25:36.336235 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.336248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:36.336257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:36.336334 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:36.378038 1129259 cri.go:89] found id: ""
	I0318 14:25:36.378084 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.378099 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:36.378110 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:36.378191 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:36.425389 1129259 cri.go:89] found id: ""
	I0318 14:25:36.425433 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.425446 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:36.425453 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:36.425512 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:36.464639 1129259 cri.go:89] found id: ""
	I0318 14:25:36.464683 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.464749 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:36.464763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:36.464828 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:36.509515 1129259 cri.go:89] found id: ""
	I0318 14:25:36.509550 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.509563 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:36.509573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:36.509645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:36.554761 1129259 cri.go:89] found id: ""
	I0318 14:25:36.554789 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.554800 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:36.554806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:36.554859 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:36.593817 1129259 cri.go:89] found id: ""
	I0318 14:25:36.593852 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.593861 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:36.593868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:36.593923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:36.634005 1129259 cri.go:89] found id: ""
	I0318 14:25:36.634038 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.634050 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:36.634063 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:36.634081 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:36.687869 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:36.687910 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:36.704507 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:36.704550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:36.785201 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:36.785257 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:36.785275 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:36.866058 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:36.866104 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:39.409796 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:39.426897 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:39.426972 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:39.472221 1129259 cri.go:89] found id: ""
	I0318 14:25:39.472257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.472269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:39.472285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:39.472352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:39.513920 1129259 cri.go:89] found id: ""
	I0318 14:25:39.513961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.513974 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:39.513981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:39.514049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:39.555502 1129259 cri.go:89] found id: ""
	I0318 14:25:39.555538 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.555552 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:39.555565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:39.555627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:39.601583 1129259 cri.go:89] found id: ""
	I0318 14:25:39.601614 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.601622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:39.601628 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:39.601693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:39.648429 1129259 cri.go:89] found id: ""
	I0318 14:25:39.648464 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.648473 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:39.648488 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:39.648564 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:39.698498 1129259 cri.go:89] found id: ""
	I0318 14:25:39.698531 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.698543 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:39.698551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:39.698617 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:39.751350 1129259 cri.go:89] found id: ""
	I0318 14:25:39.751392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.751403 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:39.751411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:39.751482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:39.801912 1129259 cri.go:89] found id: ""
	I0318 14:25:39.801944 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.801956 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:39.801968 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:39.801987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:39.816041 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:39.816076 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:39.899569 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:39.899599 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:39.899621 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:39.980913 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:39.980961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:40.026279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:40.026319 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:38.199879 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:40.698620 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:41.801098 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:44.301379 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:42.585034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:42.601055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:42.601161 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:42.652386 1129259 cri.go:89] found id: ""
	I0318 14:25:42.652422 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.652434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:42.652442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:42.652517 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:42.703304 1129259 cri.go:89] found id: ""
	I0318 14:25:42.703341 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.703353 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:42.703361 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:42.703433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:42.747938 1129259 cri.go:89] found id: ""
	I0318 14:25:42.747972 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.747983 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:42.747992 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:42.748061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:42.793889 1129259 cri.go:89] found id: ""
	I0318 14:25:42.793923 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.793934 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:42.793943 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:42.794012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:42.837991 1129259 cri.go:89] found id: ""
	I0318 14:25:42.838096 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.838124 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:42.838143 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:42.838225 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:42.881892 1129259 cri.go:89] found id: ""
	I0318 14:25:42.882011 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.882036 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:42.882055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:42.882140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:42.921175 1129259 cri.go:89] found id: ""
	I0318 14:25:42.921217 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.921229 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:42.921238 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:42.921310 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:42.966634 1129259 cri.go:89] found id: ""
	I0318 14:25:42.966674 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.966687 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:42.966702 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:42.966720 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:42.982243 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:42.982290 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:43.082154 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:43.082187 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:43.082205 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:43.175904 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:43.175953 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:43.220128 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:43.220224 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:45.785917 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:45.801648 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:45.801736 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:45.842731 1129259 cri.go:89] found id: ""
	I0318 14:25:45.842769 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.842782 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:45.842797 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:45.842858 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:45.887726 1129259 cri.go:89] found id: ""
	I0318 14:25:45.887771 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.887783 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:45.887792 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:45.887900 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:45.929349 1129259 cri.go:89] found id: ""
	I0318 14:25:45.929384 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.929395 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:45.929401 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:45.929473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:45.971540 1129259 cri.go:89] found id: ""
	I0318 14:25:45.971582 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.971595 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:45.971604 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:45.971681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:46.012461 1129259 cri.go:89] found id: ""
	I0318 14:25:46.012499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.012521 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:46.012530 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:46.012607 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:46.057527 1129259 cri.go:89] found id: ""
	I0318 14:25:46.057556 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.057566 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:46.057572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:46.057628 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:46.101115 1129259 cri.go:89] found id: ""
	I0318 14:25:46.101146 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.101156 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:46.101163 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:46.101218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:46.144690 1129259 cri.go:89] found id: ""
	I0318 14:25:46.144722 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.144733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:46.144747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:46.144763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:41.692077 1128964 pod_ready.go:81] duration metric: took 4m0.00104s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:41.692109 1128964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:41.692136 1128964 pod_ready.go:38] duration metric: took 4m13.711186182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:41.692170 1128964 kubeadm.go:591] duration metric: took 4m21.341445822s to restartPrimaryControlPlane
	W0318 14:25:41.692279 1128964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:41.692345 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
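
Here the 4m0s readiness wait for the metrics-server pod has expired (pod_ready.go gives up and will not retry), so minikube abandons the in-place control-plane restart and falls back to a full `kubeadm reset` before re-initialising. A hedged equivalent of the readiness check it was polling, done by hand (pod name and namespace are taken from the log; the jsonpath expression is mine, not minikube's):

# Prints "True" once the pod reports Ready; minikube polls the same condition until its deadline.
kubectl -n kube-system get pod metrics-server-57f55c9bc5-4vrvb \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
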
	I0318 14:25:46.800687 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:49.300012 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:46.198508 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:46.198552 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:46.213920 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:46.213959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:46.307837 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:46.307870 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:46.307884 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:46.393348 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:46.393393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:48.947758 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:48.963529 1129259 kubeadm.go:591] duration metric: took 4m3.701563316s to restartPrimaryControlPlane
	W0318 14:25:48.963609 1129259 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:48.963632 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:50.782362 1129259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.818697959s)
	I0318 14:25:50.782464 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:50.798866 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:50.810841 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:50.822394 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:50.822417 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:50.822464 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:50.833695 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:50.833763 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:50.845393 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:50.856807 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:50.856882 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:50.868756 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.879442 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:50.879517 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.890725 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:50.901505 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:50.901576 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
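
The grep/rm pairs above repeat for all four kubeconfigs: minikube checks each file for the expected https://control-plane.minikube.internal:8443 endpoint and, when the check fails (here the files simply no longer exist after the reset), deletes the file so the upcoming kubeadm init regenerates it. A compact sketch of that cleanup, assuming only the four files and the endpoint shown in the log:

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"   # missing or stale -> remove; kubeadm init rewrites it
done
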
	I0318 14:25:50.912911 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:50.994085 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:25:50.994244 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:51.166111 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:51.166240 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:51.166390 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:51.374393 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:51.376093 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:51.376230 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:51.376323 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:51.376464 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:51.376538 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:51.376620 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:51.376715 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:51.376821 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:51.376930 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:51.377042 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:51.377141 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:51.377202 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:51.377292 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:51.485218 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:51.556003 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:51.865954 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:52.103582 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:52.120863 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:52.122310 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:52.122433 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:52.280292 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:54.173048 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.368065771s)
	I0318 14:25:54.173145 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:54.192139 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:54.204909 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:54.217096 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:54.217126 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:54.217182 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:54.227905 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:54.228009 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:54.239854 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:54.250668 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:54.250744 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:54.263509 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.274202 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:54.274265 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.285342 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:54.296064 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:54.296157 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:25:54.307985 1128788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:54.371118 1128788 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:25:54.371202 1128788 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:54.551187 1128788 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:54.551377 1128788 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:54.551551 1128788 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:54.780034 1128788 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:54.782426 1128788 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:54.782545 1128788 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:54.782650 1128788 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:54.782735 1128788 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:54.782829 1128788 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:54.782930 1128788 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:54.783213 1128788 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:54.783717 1128788 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:54.784390 1128788 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:54.784849 1128788 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:54.785263 1128788 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:54.785725 1128788 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:54.785826 1128788 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:55.130998 1128788 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:55.387076 1128788 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:55.517240 1128788 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:51.300209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:53.303010 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.800703 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.906565 1128788 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:55.907198 1128788 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:55.909674 1128788 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:52.282451 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:25:52.282559 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:52.289015 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:52.290093 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:52.290987 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:52.293794 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
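
At this point the v1.20.0 init (process 1129259) hands control to the kubelet and waits up to 4m0s for the static control-plane pods to come up from /etc/kubernetes/manifests. A quick by-hand view of what that wait is watching (the crictl command is the same one used earlier in this log; the ls is an obvious addition, not something minikube runs here):

ls /etc/kubernetes/manifests                      # apiserver, controller-manager, scheduler and etcd manifests
sudo crictl ps -a --quiet --name=kube-apiserver   # non-empty once the kubelet has started the apiserver container
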
	I0318 14:25:55.912196 1128788 out.go:204]   - Booting up control plane ...
	I0318 14:25:55.912323 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:55.912407 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:55.912494 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:55.932596 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:55.935171 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:55.935520 1128788 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:56.083395 1128788 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:58.300288 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:00.800291 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:02.086878 1128788 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002842 seconds
	I0318 14:26:02.087052 1128788 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:02.102499 1128788 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:02.637889 1128788 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:02.638152 1128788 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-767719 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:03.157386 1128788 kubeadm.go:309] [bootstrap-token] Using token: do2whq.efhsaljmpmqgv9gj
	I0318 14:26:03.159248 1128788 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:03.159429 1128788 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:03.167328 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:03.180628 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:03.185253 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:03.190014 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:03.202714 1128788 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:03.223282 1128788 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:03.504303 1128788 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:03.614837 1128788 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:03.614872 1128788 kubeadm.go:309] 
	I0318 14:26:03.614978 1128788 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:03.615004 1128788 kubeadm.go:309] 
	I0318 14:26:03.615107 1128788 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:03.615117 1128788 kubeadm.go:309] 
	I0318 14:26:03.615149 1128788 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:03.615219 1128788 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:03.615285 1128788 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:03.615293 1128788 kubeadm.go:309] 
	I0318 14:26:03.615354 1128788 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:03.615365 1128788 kubeadm.go:309] 
	I0318 14:26:03.615421 1128788 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:03.615430 1128788 kubeadm.go:309] 
	I0318 14:26:03.615486 1128788 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:03.615578 1128788 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:03.615669 1128788 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:03.615679 1128788 kubeadm.go:309] 
	I0318 14:26:03.615778 1128788 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:03.615887 1128788 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:03.615897 1128788 kubeadm.go:309] 
	I0318 14:26:03.615998 1128788 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616120 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:03.616149 1128788 kubeadm.go:309] 	--control-plane 
	I0318 14:26:03.616159 1128788 kubeadm.go:309] 
	I0318 14:26:03.616266 1128788 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:03.616276 1128788 kubeadm.go:309] 
	I0318 14:26:03.616371 1128788 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616500 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:03.617330 1128788 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
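
kubeadm init for embed-certs-767719 (v1.28.4) completes and prints the usual join commands. The --discovery-token-ca-cert-hash is just the SHA-256 of the cluster CA public key, so it can be recomputed on the node; the recipe below is the standard kubeadm one, not a step minikube performs here, and it assumes the CA is ca.crt under the certificateDir shown earlier in the log:

# Recompute the discovery hash from the cluster CA (standard kubeadm recipe; path assumed from the certificateDir above).
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
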
	I0318 14:26:03.617374 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:26:03.617384 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:03.619394 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:03.620836 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:03.665582 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
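
With the kvm2 driver and the crio runtime, minikube selects its built-in bridge CNI and copies a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist. The payload itself is not reproduced in the log; the snippet below is only an illustrative bridge + portmap conflist of the general shape such a file takes, with every field value an assumption rather than the actual file:

# Illustrative only -- the real 457-byte conflist minikube wrote is not shown in the log; all values below are assumptions.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
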
	I0318 14:26:03.812834 1128788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:03.812897 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:03.812943 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-767719 minikube.k8s.io/updated_at=2024_03_18T14_26_03_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=embed-certs-767719 minikube.k8s.io/primary=true
	I0318 14:26:03.899419 1128788 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:04.104407 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:04.604499 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.104532 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.605047 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:02.800707 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:04.802167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:06.105187 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:06.604462 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.104411 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.605096 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.104448 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.604430 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.104707 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.605130 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.104955 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.605165 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.300575 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:09.798776 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:11.104436 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.605273 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.104851 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.604819 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.104669 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.605089 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.105486 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.604568 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.104455 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.604422 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.799935 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:13.800907 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:15.801754 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:16.105107 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:16.205506 1128788 kubeadm.go:1107] duration metric: took 12.39266353s to wait for elevateKubeSystemPrivileges
	W0318 14:26:16.205558 1128788 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:16.205570 1128788 kubeadm.go:393] duration metric: took 5m15.738081871s to StartCluster
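
The long run of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges step: right after creating the minikube-rbac clusterrolebinding, minikube polls until the default ServiceAccount exists before declaring the cluster started; here that took about 12.4s. A hedged shell equivalent of that wait (minikube runs this loop internally in kubeadm.go; the sleep interval is an assumption, though the log shows roughly two attempts per second):

# Poll until the service-account controller has created the default ServiceAccount, then stop.
until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5   # interval assumed; minikube's own loop is in Go, not a shell script
done
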
	I0318 14:26:16.205599 1128788 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.205720 1128788 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:16.208645 1128788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.209157 1128788 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:16.210915 1128788 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:16.209206 1128788 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:16.209401 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:16.212258 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:16.212275 1128788 addons.go:69] Setting default-storageclass=true in profile "embed-certs-767719"
	I0318 14:26:16.212351 1128788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-767719"
	I0318 14:26:16.212260 1128788 addons.go:69] Setting metrics-server=true in profile "embed-certs-767719"
	I0318 14:26:16.212415 1128788 addons.go:234] Setting addon metrics-server=true in "embed-certs-767719"
	W0318 14:26:16.212431 1128788 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:16.212469 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212260 1128788 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-767719"
	I0318 14:26:16.212512 1128788 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-767719"
	W0318 14:26:16.212527 1128788 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:16.212560 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.212983 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213003 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213028 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.213040 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.231532 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0318 14:26:16.231543 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0318 14:26:16.232128 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0318 14:26:16.232280 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232284 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232882 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.232907 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.232922 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.233258 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233284 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.233360 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.233479 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233501 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.235956 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236151 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236372 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236411 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.236545 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236568 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.240163 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.244336 1128788 addons.go:234] Setting addon default-storageclass=true in "embed-certs-767719"
	W0318 14:26:16.244370 1128788 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:16.244407 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.244845 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.244894 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.257940 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0318 14:26:16.258701 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.259359 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.259386 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.259769 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.260030 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.262272 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.262286 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0318 14:26:16.264459 1128788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:16.262834 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.265430 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I0318 14:26:16.266198 1128788 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.266220 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:16.266240 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.266482 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.266663 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.266676 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267253 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.267277 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267753 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.268456 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.268605 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.269068 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.269098 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.269804 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270398 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.270420 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270711 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.270989 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.271183 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.271362 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.271984 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.273854 1128788 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:14.305258 1128964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.612890386s)
	I0318 14:26:14.305324 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:14.325572 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:26:14.337875 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:26:14.350490 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:26:14.350530 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:26:14.350592 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:26:14.361521 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:26:14.361612 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:26:14.372767 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:26:14.383545 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:26:14.383614 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:26:14.394057 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.404187 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:26:14.404261 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.415029 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:26:14.425738 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:26:14.425820 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
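The block above shows the stale-config cleanup pattern: each expected kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not reference it, so the following kubeadm init can regenerate it. Below is a minimal local Go sketch of that check-then-remove loop; running the commands locally via os/exec is an assumption for illustration (the log runs them over SSH inside the guest VM).

// stale_config_cleanup.go - sketch of the cleanup loop shown in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file does not exist.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}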
	I0318 14:26:14.436847 1128964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:26:14.674909 1128964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:16.275278 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:16.275298 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:16.275323 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.278500 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.278909 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.278939 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.279230 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.279437 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.279612 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.279748 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.286716 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0318 14:26:16.287176 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.287651 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.287678 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.288057 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.288248 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.290084 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.290359 1128788 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.290381 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:16.290404 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.293253 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293662 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.293688 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293886 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.294078 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.294241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.294398 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.460832 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:16.537089 1128788 node_ready.go:35] waiting up to 6m0s for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550362 1128788 node_ready.go:49] node "embed-certs-767719" has status "Ready":"True"
	I0318 14:26:16.550391 1128788 node_ready.go:38] duration metric: took 13.195546ms for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550405 1128788 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:16.557745 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:16.638531 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:16.638565 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:16.664638 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.762661 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:16.762713 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:16.792712 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.859169 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:16.859200 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:16.954827 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
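The addon installation above follows a simple pattern: each manifest is copied under /etc/kubernetes/addons/ and then all of them are applied in a single kubectl invocation using the cluster's kubeconfig and the bundled kubectl binary. A rough local Go sketch of that apply step follows; the paths and binary location are taken from the log, while running the command locally rather than over SSH is an assumption for illustration.

// apply_addons.go - sketch of the single "kubectl apply -f ... -f ..." addon apply step.
package main

import (
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}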
	I0318 14:26:18.103559 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.103592 1128788 pod_ready.go:81] duration metric: took 1.545818643s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.103606 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.256039 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.591350359s)
	I0318 14:26:18.256112 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256129 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256483 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256513 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256530 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256528 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.256541 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256918 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256936 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256950 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.264761 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.264788 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.265133 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.265164 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.265193 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.652953 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.653088 1128788 pod_ready.go:81] duration metric: took 549.466665ms for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.653124 1128788 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674506 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.674553 1128788 pod_ready.go:81] duration metric: took 21.386005ms for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674568 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.680422 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.887663901s)
	I0318 14:26:18.680486 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680498 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.680875 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.680887 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.680903 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.680921 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680928 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.681198 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.681199 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.681277 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.711919 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.711954 1128788 pod_ready.go:81] duration metric: took 37.376915ms for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.711968 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730096 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.730129 1128788 pod_ready.go:81] duration metric: took 18.151839ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730145 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.756000 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.801120989s)
	I0318 14:26:18.756076 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756091 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756416 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756435 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756445 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756452 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756849 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756883 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756895 1128788 addons.go:470] Verifying addon metrics-server=true in "embed-certs-767719"
	I0318 14:26:18.756917 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.759019 1128788 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 14:26:18.760442 1128788 addons.go:505] duration metric: took 2.551236037s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 14:26:18.942164 1128788 pod_ready.go:92] pod "kube-proxy-f4547" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.942196 1128788 pod_ready.go:81] duration metric: took 212.040337ms for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.942205 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341772 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:19.341808 1128788 pod_ready.go:81] duration metric: took 399.594033ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341820 1128788 pod_ready.go:38] duration metric: took 2.791403027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
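The "extra waiting" summarized above checks that every system-critical pod matching the listed label selectors reaches the Ready condition. An approximate stand-alone equivalent can be expressed with kubectl wait, as in the Go sketch below; the selectors and the 6m timeout mirror the log, while the local kubectl and default kubeconfig are assumptions (kubectl wait also errors if a selector matches no pods, which the in-tree helper tolerates).

// wait_system_pods.go - approximate kubectl-wait equivalent of the pod_ready step.
package main

import (
	"os"
	"os/exec"
)

func main() {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		cmd := exec.Command("kubectl", "-n", "kube-system", "wait",
			"--for=condition=Ready", "pod", "-l", sel, "--timeout=6m")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}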
	I0318 14:26:19.341841 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:19.341921 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:19.362110 1128788 api_server.go:72] duration metric: took 3.152894755s to wait for apiserver process to appear ...
	I0318 14:26:19.362150 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:19.362209 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:26:19.368138 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:26:19.369583 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:19.369608 1128788 api_server.go:131] duration metric: took 7.450993ms to wait for apiserver health ...
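The healthz wait above is a plain HTTP GET against the apiserver endpoint that treats a 200 response with body "ok" as healthy. A small Go sketch of such a probe follows; skipping TLS verification is an illustration-only assumption, since the real check authenticates with the cluster's client certificates.

// healthz_probe.go - sketch of the GET /healthz readiness probe shown above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.45:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}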
	I0318 14:26:19.369617 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:19.545388 1128788 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:19.545423 1128788 system_pods.go:61] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.545428 1128788 system_pods.go:61] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.545431 1128788 system_pods.go:61] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.545434 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.545438 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.545441 1128788 system_pods.go:61] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.545443 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.545449 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.545455 1128788 system_pods.go:61] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.545464 1128788 system_pods.go:74] duration metric: took 175.840386ms to wait for pod list to return data ...
	I0318 14:26:19.545473 1128788 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:19.741364 1128788 default_sa.go:45] found service account: "default"
	I0318 14:26:19.741405 1128788 default_sa.go:55] duration metric: took 195.920075ms for default service account to be created ...
	I0318 14:26:19.741424 1128788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:19.945000 1128788 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:19.945039 1128788 system_pods.go:89] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.945047 1128788 system_pods.go:89] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.945053 1128788 system_pods.go:89] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.945060 1128788 system_pods.go:89] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.945066 1128788 system_pods.go:89] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.945070 1128788 system_pods.go:89] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.945076 1128788 system_pods.go:89] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.945087 1128788 system_pods.go:89] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.945097 1128788 system_pods.go:89] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.945110 1128788 system_pods.go:126] duration metric: took 203.67742ms to wait for k8s-apps to be running ...
	I0318 14:26:19.945122 1128788 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:19.945188 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:19.987286 1128788 system_svc.go:56] duration metric: took 42.149434ms WaitForService to wait for kubelet
	I0318 14:26:19.987328 1128788 kubeadm.go:576] duration metric: took 3.778120092s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:19.987361 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:20.141763 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:20.141803 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:20.141822 1128788 node_conditions.go:105] duration metric: took 154.45408ms to run NodePressure ...
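The NodePressure step above reads the node's reported cpu and ephemeral-storage capacity. The same figures can be pulled from the node object with a jsonpath query, as in the sketch below; the plain kubectl invocation and default kubeconfig are assumptions for the example.

// node_capacity.go - sketch of reading the node capacity values logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	jp := `jsonpath={range .items[*]}{.metadata.name}{" cpu="}{.status.capacity.cpu}{" ephemeral-storage="}{.status.capacity['ephemeral-storage']}{"\n"}{end}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", jp).Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}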
	I0318 14:26:20.141840 1128788 start.go:240] waiting for startup goroutines ...
	I0318 14:26:20.141851 1128788 start.go:245] waiting for cluster config update ...
	I0318 14:26:20.141867 1128788 start.go:254] writing updated cluster config ...
	I0318 14:26:20.142268 1128788 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:20.206832 1128788 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:20.209057 1128788 out.go:177] * Done! kubectl is now configured to use "embed-certs-767719" cluster and "default" namespace by default
	I0318 14:26:18.302228 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:20.799704 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.444912 1128964 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:26:23.444993 1128964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:26:23.445098 1128964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:26:23.445212 1128964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:26:23.445359 1128964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:26:23.445461 1128964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:26:23.446790 1128964 out.go:204]   - Generating certificates and keys ...
	I0318 14:26:23.446904 1128964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:26:23.446986 1128964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:26:23.447102 1128964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:26:23.447194 1128964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:26:23.447309 1128964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:26:23.447376 1128964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:26:23.447453 1128964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:26:23.447529 1128964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:26:23.447607 1128964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:26:23.447693 1128964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:26:23.447741 1128964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:26:23.447856 1128964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:26:23.447937 1128964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:26:23.448019 1128964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:26:23.448121 1128964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:26:23.448194 1128964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:26:23.448311 1128964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:26:23.448422 1128964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:26:23.450038 1128964 out.go:204]   - Booting up control plane ...
	I0318 14:26:23.450174 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:26:23.450282 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:26:23.450371 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:26:23.450509 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:26:23.450633 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:26:23.450671 1128964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:26:23.450818 1128964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:26:23.450887 1128964 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.005932 seconds
	I0318 14:26:23.450974 1128964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:23.451093 1128964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:23.451143 1128964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:23.451340 1128964 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-075922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:23.451414 1128964 kubeadm.go:309] [bootstrap-token] Using token: k51w96.h8xduusjdfbez3gf
	I0318 14:26:23.452848 1128964 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:23.452964 1128964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:23.453073 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:23.453269 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:23.453499 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:23.453664 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:23.453785 1128964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:23.453940 1128964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:23.454005 1128964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:23.454074 1128964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:23.454084 1128964 kubeadm.go:309] 
	I0318 14:26:23.454172 1128964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:23.454186 1128964 kubeadm.go:309] 
	I0318 14:26:23.454288 1128964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:23.454298 1128964 kubeadm.go:309] 
	I0318 14:26:23.454335 1128964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:23.454412 1128964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:23.454475 1128964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:23.454484 1128964 kubeadm.go:309] 
	I0318 14:26:23.454528 1128964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:23.454538 1128964 kubeadm.go:309] 
	I0318 14:26:23.454592 1128964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:23.454599 1128964 kubeadm.go:309] 
	I0318 14:26:23.454681 1128964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:23.454804 1128964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:23.454907 1128964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:23.454919 1128964 kubeadm.go:309] 
	I0318 14:26:23.455027 1128964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:23.455146 1128964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:23.455157 1128964 kubeadm.go:309] 
	I0318 14:26:23.455264 1128964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455401 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:23.455433 1128964 kubeadm.go:309] 	--control-plane 
	I0318 14:26:23.455441 1128964 kubeadm.go:309] 
	I0318 14:26:23.455551 1128964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:23.455560 1128964 kubeadm.go:309] 
	I0318 14:26:23.455666 1128964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455814 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:23.455838 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:26:23.455849 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:23.457678 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:22.801209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:25.305096 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.459285 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:23.475803 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
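The bridge CNI step above writes a conflist to /etc/cni/net.d/1-k8s.conflist. The exact 457-byte file minikube generates is not reproduced in the log; the sketch below writes a generic bridge-plus-portmap conflist of the same shape, with the subnet chosen arbitrarily for illustration (writing under /etc/cni/net.d requires root).

// bridge_cni_sketch.go - writes a generic bridge CNI conflist as an illustration.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}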
	I0318 14:26:23.515652 1128964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-075922 minikube.k8s.io/updated_at=2024_03_18T14_26_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=default-k8s-diff-port-075922 minikube.k8s.io/primary=true
	I0318 14:26:23.796828 1128964 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:23.796947 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.296970 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.797728 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.297564 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.797144 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:26.297056 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.800960 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:29.802967 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:26.798004 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.297935 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.797550 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.297031 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.797624 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.297549 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.797256 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.297964 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.797927 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:31.297742 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.300787 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:34.800941 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:31.797040 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.297155 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.797371 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.297809 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.797723 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.297045 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.797008 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.297030 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.797767 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.895914 1128964 kubeadm.go:1107] duration metric: took 12.380212538s to wait for elevateKubeSystemPrivileges
	W0318 14:26:35.895975 1128964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:35.895987 1128964 kubeadm.go:393] duration metric: took 5m15.606276512s to StartCluster
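The long run of "kubectl get sa default" lines above is the elevateKubeSystemPrivileges wait: after granting kube-system:default the cluster-admin role, the default service account is polled until it exists. A stand-alone Go sketch of that polling loop follows; the kubectl and kubeconfig paths come from the log, while the 500ms interval and 2 minute cap are assumptions for the example.

// wait_default_sa.go - sketch of the poll-until-default-service-account-exists loop.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
	os.Exit(1)
}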
	I0318 14:26:35.896013 1128964 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.896123 1128964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:35.898023 1128964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.898324 1128964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:35.900235 1128964 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:35.898415 1128964 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:35.898550 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:35.901588 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:35.901599 1128964 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901617 1128964 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901640 1128964 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901650 1128964 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:35.901665 1128964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-075922"
	I0318 14:26:35.901588 1128964 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901698 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.901723 1128964 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901735 1128964 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:35.901764 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.902055 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902088 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902097 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902126 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902130 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902169 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.919538 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0318 14:26:35.920140 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.920836 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.920864 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.921282 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.921940 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.921983 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.923313 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
	I0318 14:26:35.923321 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0318 14:26:35.923742 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.923792 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.924263 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924280 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924381 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924395 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924710 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924733 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924893 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.925215 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.925235 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.928021 1128964 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.928047 1128964 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:35.928081 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.928422 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.928449 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.941908 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0318 14:26:35.942465 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.943114 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.943146 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.943757 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.943991 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.944493 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0318 14:26:35.944874 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.945387 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.945404 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.945865 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.945988 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.948302 1128964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:35.946821 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.947744 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0318 14:26:35.950087 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:35.950110 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:35.950135 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.950181 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.950664 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.951258 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.951295 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.951755 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.952146 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.953842 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954331 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.954353 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954360 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.954563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.956253 1128964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:35.954739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:32.294235 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:26:32.295514 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:32.295750 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:35.956487 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.957743 1128964 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:35.957764 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:35.957783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.957864 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.960451 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.960896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.960929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.961107 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.961281 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.961435 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.961565 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.968795 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0318 14:26:35.969191 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.969631 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.969646 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.969955 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.970117 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.971799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.972169 1128964 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:35.972188 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:35.972206 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.974906 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975268 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.975301 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975551 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.975767 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.975958 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.976137 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:36.122420 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:36.139655 1128964 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160857 1128964 node_ready.go:49] node "default-k8s-diff-port-075922" has status "Ready":"True"
	I0318 14:26:36.160883 1128964 node_ready.go:38] duration metric: took 21.193343ms for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160893 1128964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:36.176832 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:36.240357 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:36.240385 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:36.261620 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:36.279644 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:36.294510 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:36.294546 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:36.374231 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:36.376166 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:36.419045 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
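	(Editorial note on the addon steps above: each addon is just a manifest copied into /etc/kubernetes/addons/ and applied with the guest's own kubectl binary under the guest kubeconfig. The Go sketch below mirrors that apply step with os/exec; it is an illustration only, not minikube's actual ssh_runner code. The binary path, kubeconfig path, and manifest names are the ones visible in the log.)

// applyaddons.go - illustrative sketch of the "kubectl apply -f ..." step above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// applyManifests runs `kubectl apply -f m1 -f m2 ...` against the given kubeconfig.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	// kubectl picks up the target cluster from KUBECONFIG.
	cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	// Paths as they appear in the log; adjust for your own cluster.
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	if err != nil {
		log.Fatal(err)
	}
}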
	I0318 14:26:38.032072 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.752379015s)
	I0318 14:26:38.032148 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032161 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032374 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.770714521s)
	I0318 14:26:38.032416 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032427 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032623 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032652 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032660 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032683 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032796 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032814 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032817 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032835 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032848 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.033046 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033107 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033173 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.033149 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033259 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033284 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.112866 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.112896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.113337 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.113362 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.113384 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176199 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.757085355s)
	I0318 14:26:38.176281 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176302 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176669 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176683 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176697 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176707 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176716 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176955 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176969 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176980 1128964 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-075922"
	I0318 14:26:38.178714 1128964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:26:37.300219 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:39.293136 1128583 pod_ready.go:81] duration metric: took 4m0.000606722s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
	E0318 14:26:39.293173 1128583 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:26:39.293203 1128583 pod_ready.go:38] duration metric: took 4m14.549283732s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:39.293239 1128583 kubeadm.go:591] duration metric: took 4m22.862167815s to restartPrimaryControlPlane
	W0318 14:26:39.293320 1128583 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:26:39.293362 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:26:37.296327 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:37.296642 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:38.180451 1128964 addons.go:505] duration metric: took 2.282033093s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:26:38.194239 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:40.186091 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.186125 1128964 pod_ready.go:81] duration metric: took 4.009253844s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.186139 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193026 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.193059 1128964 pod_ready.go:81] duration metric: took 6.912513ms for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193069 1128964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199244 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.199272 1128964 pod_ready.go:81] duration metric: took 6.195834ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199283 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.204991 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.205019 1128964 pod_ready.go:81] duration metric: took 5.728459ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.205034 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214706 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.214730 1128964 pod_ready.go:81] duration metric: took 9.687528ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214739 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.581970 1128964 pod_ready.go:92] pod "kube-proxy-bzwvf" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.582045 1128964 pod_ready.go:81] duration metric: took 367.297496ms for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.582059 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981562 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.981592 1128964 pod_ready.go:81] duration metric: took 399.525488ms for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981601 1128964 pod_ready.go:38] duration metric: took 4.820697544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
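	(Editorial note: the pod_ready waits above poll each system pod until its Ready condition reports True, giving up after the stated timeout. Below is a minimal sketch of such a loop, assuming kubectl is on PATH; minikube's own implementation lives in pod_ready.go and differs in detail.)

// waitready.go - illustrative readiness polling loop.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

// waitForPod polls until the pod is Ready or the timeout expires.
func waitForPod(namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ok, err := podReady(namespace, name); err == nil && ok {
			return nil
		}
		time.Sleep(2 * time.Second) // fixed poll interval; minikube uses its own backoff
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", namespace, name, timeout)
}

func main() {
	// Example pod name taken from the log above.
	if err := waitForPod("kube-system", "etcd-default-k8s-diff-port-075922", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pod is Ready")
}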
	I0318 14:26:40.981618 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:40.981676 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:40.998626 1128964 api_server.go:72] duration metric: took 5.100242538s to wait for apiserver process to appear ...
	I0318 14:26:40.998672 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:40.998703 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:26:41.010986 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:26:41.012714 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:41.012742 1128964 api_server.go:131] duration metric: took 14.061953ms to wait for apiserver health ...
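	(Editorial note: the healthz step above is an HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response with body "ok". A minimal sketch follows; TLS verification is skipped only to keep it self-contained, whereas the real check trusts the cluster CA.)

// healthz.go - minimal sketch of the apiserver health probe seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for the sketch; do not skip verification in real tooling.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// Endpoint from the log; substitute your own apiserver address and port.
	resp, err := client.Get("https://192.168.83.39:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}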
	I0318 14:26:41.012750 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:41.186873 1128964 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:41.186910 1128964 system_pods.go:61] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.186917 1128964 system_pods.go:61] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.186922 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.186935 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.186943 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.186948 1128964 system_pods.go:61] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.186953 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.187013 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.187029 1128964 system_pods.go:61] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.187041 1128964 system_pods.go:74] duration metric: took 174.283401ms to wait for pod list to return data ...
	I0318 14:26:41.187054 1128964 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:41.381195 1128964 default_sa.go:45] found service account: "default"
	I0318 14:26:41.381238 1128964 default_sa.go:55] duration metric: took 194.17219ms for default service account to be created ...
	I0318 14:26:41.381252 1128964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:41.584896 1128964 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:41.584934 1128964 system_pods.go:89] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.584940 1128964 system_pods.go:89] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.584945 1128964 system_pods.go:89] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.584952 1128964 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.584957 1128964 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.584961 1128964 system_pods.go:89] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.584965 1128964 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.584974 1128964 system_pods.go:89] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.584980 1128964 system_pods.go:89] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.584996 1128964 system_pods.go:126] duration metric: took 203.730421ms to wait for k8s-apps to be running ...
	I0318 14:26:41.585011 1128964 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:41.585065 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:41.602211 1128964 system_svc.go:56] duration metric: took 17.185915ms WaitForService to wait for kubelet
	I0318 14:26:41.602253 1128964 kubeadm.go:576] duration metric: took 5.703881545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:41.602283 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:41.781292 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:41.781321 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:41.781333 1128964 node_conditions.go:105] duration metric: took 179.044515ms to run NodePressure ...
	I0318 14:26:41.781345 1128964 start.go:240] waiting for startup goroutines ...
	I0318 14:26:41.781352 1128964 start.go:245] waiting for cluster config update ...
	I0318 14:26:41.781363 1128964 start.go:254] writing updated cluster config ...
	I0318 14:26:41.781670 1128964 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:41.845950 1128964 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:41.848522 1128964 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-075922" cluster and "default" namespace by default
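	(Editorial note: the closing message flags a client/server minor-version skew of 1, kubectl 1.29.3 against a 1.28.4 cluster, which is within kubectl's supported window of one minor version. A sketch of reading both versions follows, assuming `kubectl version -o json` output; field names beyond clientVersion/serverVersion major/minor are not relied on.)

// skew.go - sketch of the client/server minor-skew check reported above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

type versionInfo struct {
	Major string `json:"major"`
	Minor string `json:"minor"`
}

type versionOutput struct {
	ClientVersion versionInfo `json:"clientVersion"`
	ServerVersion versionInfo `json:"serverVersion"`
}

// minor parses the minor field, tolerating a "+" suffix on some builds.
func minor(v versionInfo) int {
	n, _ := strconv.Atoi(strings.TrimRight(v.Minor, "+"))
	return n
}

func main() {
	out, err := exec.Command("kubectl", "version", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	var v versionOutput
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	skew := minor(v.ClientVersion) - minor(v.ServerVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("client minor %s, server minor %s, skew %d\n",
		v.ClientVersion.Minor, v.ServerVersion.Minor, skew)
}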
	I0318 14:26:47.296738 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:47.296974 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:07.297620 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:07.297848 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
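	(Editorial note: the repeated [kubelet-check] lines come from kubeadm probing the kubelet's local health endpoint on port 10248 until it answers; "connection refused" simply means the kubelet has not come up yet. An illustrative probe loop follows; it is not kubeadm's actual code.)

// kubelethealth.go - sketch of the localhost:10248 probe behind the [kubelet-check] retries above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		// Connection refused here usually means the kubelet has not started
		// (or is crash-looping), exactly as the log above reports.
		time.Sleep(10 * time.Second)
	}
	fmt.Println("kubelet did not become healthy in time")
}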
	I0318 14:27:11.668940 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.375539998s)
	I0318 14:27:11.669036 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:11.687767 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:27:11.699135 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:11.710896 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:11.710924 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:11.710971 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:11.721562 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:11.721638 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:11.733335 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:11.744643 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:11.744724 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:11.755801 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.766424 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:11.766515 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.777734 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:11.788887 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:11.788972 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
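	(Editorial note: the cleanup above checks whether each /etc/kubernetes/*.conf still references the expected control-plane endpoint and removes any file that does not; in this run the files are simply missing after `kubeadm reset`, so there is nothing to keep. A sketch of the same check-and-remove pass follows, with the endpoint and file list taken from the log.)

// staleconf.go - sketch of the stale-kubeconfig check-and-remove step above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file (as in the log after the reset): nothing to clean up.
			fmt.Printf("%s: %v\n", f, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// Stale config pointing elsewhere: remove it so kubeadm init regenerates it.
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}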
	I0318 14:27:11.800792 1128583 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:11.858933 1128583 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 14:27:11.859030 1128583 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:27:12.029485 1128583 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:27:12.029703 1128583 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:27:12.029833 1128583 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:27:12.279174 1128583 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:27:12.281285 1128583 out.go:204]   - Generating certificates and keys ...
	I0318 14:27:12.281400 1128583 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:27:12.281507 1128583 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:27:12.281633 1128583 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:27:12.281726 1128583 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:27:12.281844 1128583 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:27:12.281938 1128583 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:27:12.282031 1128583 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:27:12.282121 1128583 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:27:12.282218 1128583 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:27:12.282325 1128583 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:27:12.282392 1128583 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:27:12.282470 1128583 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:27:12.605106 1128583 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:27:12.950706 1128583 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 14:27:13.067948 1128583 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:27:13.340677 1128583 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:27:13.393147 1128583 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:27:13.393891 1128583 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:27:13.396474 1128583 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:27:13.398563 1128583 out.go:204]   - Booting up control plane ...
	I0318 14:27:13.398698 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:27:13.398814 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:27:13.398900 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:27:13.422155 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:27:13.423529 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:27:13.423626 1128583 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:27:13.568295 1128583 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:27:19.571958 1128583 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003509 seconds
	I0318 14:27:19.587644 1128583 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:27:19.607417 1128583 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:27:20.153253 1128583 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:27:20.153526 1128583 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-188109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:27:20.671613 1128583 kubeadm.go:309] [bootstrap-token] Using token: oq5d1l.24j9td8ex727h998
	I0318 14:27:20.673250 1128583 out.go:204]   - Configuring RBAC rules ...
	I0318 14:27:20.673402 1128583 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:27:20.680765 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:27:20.693884 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:27:20.698696 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:27:20.702572 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:27:20.710027 1128583 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:27:20.725068 1128583 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:27:20.981178 1128583 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:27:21.104335 1128583 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:27:21.107428 1128583 kubeadm.go:309] 
	I0318 14:27:21.107550 1128583 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:27:21.107596 1128583 kubeadm.go:309] 
	I0318 14:27:21.107725 1128583 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:27:21.107750 1128583 kubeadm.go:309] 
	I0318 14:27:21.107796 1128583 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:27:21.107894 1128583 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:27:21.107995 1128583 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:27:21.108030 1128583 kubeadm.go:309] 
	I0318 14:27:21.108127 1128583 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:27:21.108145 1128583 kubeadm.go:309] 
	I0318 14:27:21.108228 1128583 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:27:21.108242 1128583 kubeadm.go:309] 
	I0318 14:27:21.108318 1128583 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:27:21.108400 1128583 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:27:21.108487 1128583 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:27:21.108503 1128583 kubeadm.go:309] 
	I0318 14:27:21.108628 1128583 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:27:21.108730 1128583 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:27:21.108741 1128583 kubeadm.go:309] 
	I0318 14:27:21.108839 1128583 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.108968 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:27:21.109031 1128583 kubeadm.go:309] 	--control-plane 
	I0318 14:27:21.109054 1128583 kubeadm.go:309] 
	I0318 14:27:21.109176 1128583 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:27:21.109195 1128583 kubeadm.go:309] 
	I0318 14:27:21.109298 1128583 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.109455 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:27:21.114992 1128583 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
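	(Editorial note: the join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's Subject Public Key Info. The sketch below recomputes that value so a joining node can cross-check it; the ca.crt path follows the certificateDir reported earlier in the log and may need adjusting.)

// cahash.go - recompute the --discovery-token-ca-cert-hash value shown in the join commands above.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed from the "[certs] Using certificateDir folder" line above.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println("read ca.crt:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found in ca.crt")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse certificate:", err)
		return
	}
	// The hash is sha256 over the DER-encoded Subject Public Key Info of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:]) // compare against the hash in `kubeadm join`
}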
	I0318 14:27:21.115128 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:27:21.115151 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:27:21.116940 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:27:21.118320 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:27:21.167945 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
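	(Editorial note: the 457-byte file copied to /etc/cni/net.d/1-k8s.conflist carries the bridge CNI configuration mentioned above. Its exact contents are not shown in the log, so the conflist below is only a typical bridge + host-local + portmap layout with placeholder values, written out the same way.)

// cniconf.go - writes an illustrative bridge CNI conflist; names and subnet are placeholders.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Same destination path as in the log; requires root on the node.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write conflist:", err)
		return
	}
	fmt.Println("bridge CNI configuration written")
}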
	I0318 14:27:21.256429 1128583 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-188109 minikube.k8s.io/updated_at=2024_03_18T14_27_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=no-preload-188109 minikube.k8s.io/primary=true
	I0318 14:27:21.315419 1128583 ops.go:34] apiserver oom_adj: -16
	I0318 14:27:21.530472 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.030814 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.531214 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.030869 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.530677 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.031137 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.531400 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.031455 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.530648 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.031501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.531399 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.031109 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.531261 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.030757 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.531295 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.030505 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.531501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.030996 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.530490 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.030520 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.531340 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.031217 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.531425 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.031231 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.531300 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.678904 1128583 kubeadm.go:1107] duration metric: took 12.422463336s to wait for elevateKubeSystemPrivileges
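	(Editorial note: the burst of `kubectl get sa default` calls above, one roughly every 500ms, is a plain retry loop waiting for the default service account to exist before the bootstrap is considered done. A sketch of that loop follows, reusing the binary and kubeconfig paths from the log; it is an illustration, not minikube's elevateKubeSystemPrivileges code.)

// waitsa.go - sketch of the retry loop behind the repeated "kubectl get sa default" calls above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl" // path from the log
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit code 0 means the default service account exists.
		err := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence visible in the log
	}
	fmt.Println("timed out waiting for the default service account")
}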
	W0318 14:27:33.678959 1128583 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:27:33.678972 1128583 kubeadm.go:393] duration metric: took 5m17.305262011s to StartCluster
	I0318 14:27:33.678999 1128583 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.679119 1128583 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:27:33.681595 1128583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.681893 1128583 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:27:33.683724 1128583 out.go:177] * Verifying Kubernetes components...
	I0318 14:27:33.682059 1128583 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:27:33.682122 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:27:33.685123 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:27:33.685131 1128583 addons.go:69] Setting default-storageclass=true in profile "no-preload-188109"
	I0318 14:27:33.685135 1128583 addons.go:69] Setting storage-provisioner=true in profile "no-preload-188109"
	I0318 14:27:33.685139 1128583 addons.go:69] Setting metrics-server=true in profile "no-preload-188109"
	I0318 14:27:33.685165 1128583 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-188109"
	I0318 14:27:33.685173 1128583 addons.go:234] Setting addon metrics-server=true in "no-preload-188109"
	I0318 14:27:33.685175 1128583 addons.go:234] Setting addon storage-provisioner=true in "no-preload-188109"
	W0318 14:27:33.685182 1128583 addons.go:243] addon metrics-server should already be in state true
	W0318 14:27:33.685185 1128583 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:27:33.685231 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685238 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685573 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685575 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685613 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685617 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685629 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685637 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.703022 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0318 14:27:33.703262 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0318 14:27:33.703844 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704181 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704628 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704649 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.704715 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704736 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.705213 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705374 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705809 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705863 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.705911 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705987 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.706076 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0318 14:27:33.706558 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.707198 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.707222 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.707699 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.708354 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.712289 1128583 addons.go:234] Setting addon default-storageclass=true in "no-preload-188109"
	W0318 14:27:33.712323 1128583 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:27:33.712364 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.712795 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.712833 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.724381 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0318 14:27:33.724980 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.725587 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.725614 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.726054 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.726363 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.727777 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0318 14:27:33.728182 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.728497 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.730538 1128583 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:27:33.729152 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.730851 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0318 14:27:33.732037 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:27:33.732055 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:27:33.732076 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.732113 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.732489 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.732593 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.732881 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.732979 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.732991 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.733604 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.734297 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.734329 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.735399 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.737266 1128583 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:27:33.735988 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.736830 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.739081 1128583 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:33.739098 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:27:33.737327 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.739122 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.739142 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.740009 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.740263 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.740482 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.742702 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743181 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.743211 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743473 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.743706 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.743902 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.744097 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.752903 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0318 14:27:33.756275 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.756901 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.756932 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.757363 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.757603 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.759471 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.759732 1128583 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:33.759751 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:27:33.759772 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.762687 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763139 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.763162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763414 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.763599 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.763765 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.763919 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.942490 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:27:33.975796 1128583 node_ready.go:35] waiting up to 6m0s for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008100 1128583 node_ready.go:49] node "no-preload-188109" has status "Ready":"True"
	I0318 14:27:34.008135 1128583 node_ready.go:38] duration metric: took 32.281068ms for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008149 1128583 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:34.039370 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:34.067765 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:27:34.067798 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:27:34.088294 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:34.091931 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:34.121689 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:27:34.121722 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:27:34.183609 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:34.183638 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:27:34.264906 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:35.590900 1128583 pod_ready.go:92] pod "coredns-76f75df574-jk9v5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.590928 1128583 pod_ready.go:81] duration metric: took 1.551526097s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.590938 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605647 1128583 pod_ready.go:92] pod "coredns-76f75df574-xczpc" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.605675 1128583 pod_ready.go:81] duration metric: took 14.730232ms for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605685 1128583 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.613213 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.521243904s)
	I0318 14:27:35.613276 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613289 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613282 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.524948587s)
	I0318 14:27:35.613324 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.613811 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613813 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.613824 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613831 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614119 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614166 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614183 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.614191 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614192 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.614234 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614273 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614502 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614517 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.636576 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.636610 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.636920 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.636946 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.656945 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.656972 1128583 pod_ready.go:81] duration metric: took 51.280554ms for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.656983 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683260 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.683291 1128583 pod_ready.go:81] duration metric: took 26.301625ms for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683301 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.691855 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42688194s)
	I0318 14:27:35.691918 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.691934 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692300 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692325 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692336 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.692344 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692661 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.692701 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692709 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692721 1128583 addons.go:470] Verifying addon metrics-server=true in "no-preload-188109"
	I0318 14:27:35.694758 1128583 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:27:35.696004 1128583 addons.go:505] duration metric: took 2.013954954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:27:35.709010 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.709035 1128583 pod_ready.go:81] duration metric: took 25.726967ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.709045 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982032 1128583 pod_ready.go:92] pod "kube-proxy-qpxx5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.982080 1128583 pod_ready.go:81] duration metric: took 273.026354ms for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982094 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380184 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:36.380228 1128583 pod_ready.go:81] duration metric: took 398.123566ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380241 1128583 pod_ready.go:38] duration metric: took 2.372078145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:36.380264 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:27:36.380334 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:27:36.401316 1128583 api_server.go:72] duration metric: took 2.719374991s to wait for apiserver process to appear ...
	I0318 14:27:36.401358 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:27:36.401389 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:27:36.407212 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:27:36.408930 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:27:36.408966 1128583 api_server.go:131] duration metric: took 7.597771ms to wait for apiserver health ...
	I0318 14:27:36.408989 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:27:36.583053 1128583 system_pods.go:59] 9 kube-system pods found
	I0318 14:27:36.583099 1128583 system_pods.go:61] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.583107 1128583 system_pods.go:61] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.583112 1128583 system_pods.go:61] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.583116 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.583120 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.583123 1128583 system_pods.go:61] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.583127 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.583134 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.583138 1128583 system_pods.go:61] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.583147 1128583 system_pods.go:74] duration metric: took 174.139423ms to wait for pod list to return data ...
	I0318 14:27:36.583156 1128583 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:27:36.779733 1128583 default_sa.go:45] found service account: "default"
	I0318 14:27:36.779771 1128583 default_sa.go:55] duration metric: took 196.607194ms for default service account to be created ...
	I0318 14:27:36.779783 1128583 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:27:36.982750 1128583 system_pods.go:86] 9 kube-system pods found
	I0318 14:27:36.982783 1128583 system_pods.go:89] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.982789 1128583 system_pods.go:89] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.982793 1128583 system_pods.go:89] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.982798 1128583 system_pods.go:89] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.982804 1128583 system_pods.go:89] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.982808 1128583 system_pods.go:89] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.982812 1128583 system_pods.go:89] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.982819 1128583 system_pods.go:89] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.982823 1128583 system_pods.go:89] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.982832 1128583 system_pods.go:126] duration metric: took 203.042771ms to wait for k8s-apps to be running ...
	I0318 14:27:36.982839 1128583 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:27:36.982902 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:37.000948 1128583 system_svc.go:56] duration metric: took 18.09435ms WaitForService to wait for kubelet
	I0318 14:27:37.000980 1128583 kubeadm.go:576] duration metric: took 3.319049387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:27:37.001005 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:27:37.180608 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:27:37.180639 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:27:37.180652 1128583 node_conditions.go:105] duration metric: took 179.641912ms to run NodePressure ...
	I0318 14:27:37.180665 1128583 start.go:240] waiting for startup goroutines ...
	I0318 14:27:37.180672 1128583 start.go:245] waiting for cluster config update ...
	I0318 14:27:37.180686 1128583 start.go:254] writing updated cluster config ...
	I0318 14:27:37.181004 1128583 ssh_runner.go:195] Run: rm -f paused
	I0318 14:27:37.236286 1128583 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 14:27:37.238455 1128583 out.go:177] * Done! kubectl is now configured to use "no-preload-188109" cluster and "default" namespace by default
	I0318 14:27:47.299396 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:47.299722 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:47.299759 1129259 kubeadm.go:309] 
	I0318 14:27:47.299848 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:27:47.300040 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:27:47.300062 1129259 kubeadm.go:309] 
	I0318 14:27:47.300106 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:27:47.300187 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:27:47.300340 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:27:47.300356 1129259 kubeadm.go:309] 
	I0318 14:27:47.300534 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:27:47.300590 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:27:47.300636 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:27:47.300646 1129259 kubeadm.go:309] 
	I0318 14:27:47.300803 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:27:47.300929 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:27:47.300942 1129259 kubeadm.go:309] 
	I0318 14:27:47.301093 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:27:47.301232 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:27:47.301346 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:27:47.301475 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:27:47.301496 1129259 kubeadm.go:309] 
	I0318 14:27:47.303477 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:47.303616 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:27:47.303718 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 14:27:47.303903 1129259 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 14:27:47.303969 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:27:47.790664 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:47.807959 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:47.820332 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:47.820357 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:47.820422 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:47.832124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:47.832219 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:47.845017 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:47.856877 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:47.856954 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:47.868530 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.879309 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:47.879394 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.891766 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:47.903303 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:47.903392 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:27:47.914820 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:48.170124 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:29:44.224147 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:29:44.224414 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 14:29:44.225789 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:29:44.225885 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:29:44.226010 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:29:44.226135 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:29:44.226292 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:29:44.226384 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:29:44.228246 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:29:44.228346 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:29:44.228440 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:29:44.228567 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:29:44.228684 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:29:44.228803 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:29:44.228874 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:29:44.229018 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:29:44.229096 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:29:44.229166 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:29:44.229231 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:29:44.229269 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:29:44.229316 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:29:44.229365 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:29:44.229415 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:29:44.229468 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:29:44.229540 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:29:44.229663 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:29:44.229755 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:29:44.229804 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:29:44.229893 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:29:44.231359 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:29:44.231484 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:29:44.231592 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:29:44.231674 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:29:44.231777 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:29:44.231993 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:29:44.232046 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:29:44.232103 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232333 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232411 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232621 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232691 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232896 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232955 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233113 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233178 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233370 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233382 1129259 kubeadm.go:309] 
	I0318 14:29:44.233430 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:29:44.233480 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:29:44.233492 1129259 kubeadm.go:309] 
	I0318 14:29:44.233523 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:29:44.233554 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:29:44.233642 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:29:44.233655 1129259 kubeadm.go:309] 
	I0318 14:29:44.233797 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:29:44.233830 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:29:44.233860 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:29:44.233867 1129259 kubeadm.go:309] 
	I0318 14:29:44.233994 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:29:44.234116 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:29:44.234124 1129259 kubeadm.go:309] 
	I0318 14:29:44.234246 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:29:44.234389 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:29:44.234516 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:29:44.234606 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:29:44.234676 1129259 kubeadm.go:309] 
	I0318 14:29:44.234699 1129259 kubeadm.go:393] duration metric: took 7m59.028536241s to StartCluster
	I0318 14:29:44.234794 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:29:44.234989 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:29:44.301714 1129259 cri.go:89] found id: ""
	I0318 14:29:44.301764 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.301792 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:29:44.301801 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:29:44.301865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:29:44.345158 1129259 cri.go:89] found id: ""
	I0318 14:29:44.345197 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.345209 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:29:44.345217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:29:44.345281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:29:44.381184 1129259 cri.go:89] found id: ""
	I0318 14:29:44.381217 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.381227 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:29:44.381232 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:29:44.381296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:29:44.419906 1129259 cri.go:89] found id: ""
	I0318 14:29:44.419972 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.419987 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:29:44.419996 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:29:44.420085 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:29:44.459683 1129259 cri.go:89] found id: ""
	I0318 14:29:44.459732 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.459747 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:29:44.459755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:29:44.459848 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:29:44.502434 1129259 cri.go:89] found id: ""
	I0318 14:29:44.502477 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.502490 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:29:44.502499 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:29:44.502563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:29:44.543384 1129259 cri.go:89] found id: ""
	I0318 14:29:44.543417 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.543429 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:29:44.543438 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:29:44.543509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:29:44.584405 1129259 cri.go:89] found id: ""
	I0318 14:29:44.584450 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.584463 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:29:44.584478 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:29:44.584496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:29:44.638997 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:29:44.639036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:29:44.656641 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:29:44.656679 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:29:44.757942 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:29:44.757976 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:29:44.757994 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:29:44.878791 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:29:44.878838 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 14:29:44.926371 1129259 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 14:29:44.926432 1129259 out.go:239] * 
	W0318 14:29:44.926513 1129259 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.926548 1129259 out.go:239] * 
	W0318 14:29:44.927402 1129259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:29:44.931815 1129259 out.go:177] 
	W0318 14:29:44.933471 1129259 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.933562 1129259 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 14:29:44.933609 1129259 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 14:29:44.935544 1129259 out.go:177] 
	
	
	==> CRI-O <==
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.087429010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b9bf8f5-b237-42bf-8072-e8c3794628ff name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.087621779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6530368ed5c19d50d14c500565b1329228ea8efa8dc4e08f1e8da327ce5d5be,PodSandboxId:09b0dc6471521ff046ef51a3e04c66ed727903e3dbf3dc52e1811f81f7cbcbdd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771998615366467,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8954270-a7e4-4584-860f-eea1ffd428c5,},Annotations:map[string]string{io.kubernetes.container.hash: d32f37fb,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3afcb1dd7909b321281cf5a01f61655ce3d83a2a2fc62469c60e0a9f2deb99d,PodSandboxId:dfcece09f20d56b974bc98e6c78cb281c12ccd38355e787fba31b45113df864b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998504590794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zqnfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2603cb56-7d34-4a9e-8614-9d4f4610da6d,},Annotations:map[string]string{io.kubernetes.container.hash: 95e18d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8173e8ddba28758d34b7e79cc6df0c0b8cdb9a98897184d7e4604310a691d,PodSandboxId:507494b76833c6d6657d7de62f81baee9c66ed98380b95de6d69de88e25d5ead,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998436951863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c8q9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 207d4899-9bf3-4f4b-ab21-bc35079a0bda,},Annotations:map[string]string{io.kubernetes.container.hash: 75fa6efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946876f232cf62e7167ded808256cb1b56bf060b281f4cadc2b1e458b1d104d4,PodSandboxId:8125ae53624d55518997da9202c44f7026e43a3680a50d5a5a47f2424b9d532c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,
CreatedAt:1710771996669309971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bzwvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52bafde-a25e-4496-a987-42d88c036982,},Annotations:map[string]string{io.kubernetes.container.hash: ed8fd302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b995e68f898a1a1ea4cb4e8bf33f0df409736f666941e05b3b1f1b0f78a2f4,PodSandboxId:772252e46b79d1197ab4e7b68a7c74350054576f14539b0b26229f8f9669f248,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771977348959285
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25ac09be5cba30be6df0e359c114df82,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9686e1e42595f2d463c25afb530ae72b29c52df7fba35353127bd2642c1de58,PodSandboxId:44bb254690a2d06096d00d7355fe5cb1509660af9e9144c81ab0b7f1cc63afee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771977343426375,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f5101ea08afdfc89ee317da149610b,},Annotations:map[string]string{io.kubernetes.container.hash: b451bdfa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a01556a0b322172234d832dbfbae2f30d1710f7ef44151f4d20f3f63028905,PodSandboxId:038299a9cef4ff6466cabda6a2a3fc9972e33f012ee4e1aea05c1c1664f2894b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107719772700
79264,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4b77a9b7b4ea24d3851dcc48a94a25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f41ad4a31ca75bdbf97b201eff283130f82a87b67a1e81e8a3cae7ce149709,PodSandboxId:a1a210c5eb4252cda2e953788792f52fb7380e853a0ff617fe961bfd0059d924,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:17107719
77165175994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ebc6541380aebc06e575576f810c42,},Annotations:map[string]string{io.kubernetes.container.hash: 4b668526,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b9bf8f5-b237-42bf-8072-e8c3794628ff name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.089098629Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c6530368ed5c19d50d14c500565b1329228ea8efa8dc4e08f1e8da327ce5d5be,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f260ef43-0d3d-40a4-884f-0da6d477470c name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.089213849Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c6530368ed5c19d50d14c500565b1329228ea8efa8dc4e08f1e8da327ce5d5be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1710771998802322279,StartedAt:1710771998831157505,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8954270-a7e4-4584-860f-eea1ffd428c5,},Annotations:map[string]string{io.kubernetes.container.hash: d32f37fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a8954270-a7e4-4584-860f-eea1ffd428c5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a8954270-a7e4-4584-860f-eea1ffd428c5/containers/storage-provisioner/e2385261,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/a8954270-a7e4-4584-860f-eea1ffd428c5/volumes/kubernetes.io~projected/kube-api-access-6mcb8,Readonly:true,SelinuxR
elabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_a8954270-a7e4-4584-860f-eea1ffd428c5/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f260ef43-0d3d-40a4-884f-0da6d477470c name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.089778519Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f3afcb1dd7909b321281cf5a01f61655ce3d83a2a2fc62469c60e0a9f2deb99d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8480d7f0-c32e-4f28-bfbf-fbe95fe0475d name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.090052252Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f3afcb1dd7909b321281cf5a01f61655ce3d83a2a2fc62469c60e0a9f2deb99d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1710771998659573177,StartedAt:1710771998701955780,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zqnfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2603cb56-7d34-4a9e-8614-9d4f4610da6d,},Annotations:map[string]string{io.kubernetes.container.hash: 95e18d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/2603cb56-7d34-4a9e-8614-9d4f4610da6d/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/2603cb56-7d34-4a9e-8614-9d4f4610da6d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/2603cb56-7d34-4a9e-8614-9d4f4610da6d/containers/coredns/87bea723,Readonly:false,SelinuxRelabel:false,P
ropagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/2603cb56-7d34-4a9e-8614-9d4f4610da6d/volumes/kubernetes.io~projected/kube-api-access-w6wx4,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-zqnfs_2603cb56-7d34-4a9e-8614-9d4f4610da6d/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8480d7f0-c32e-4f28-bfbf-fbe95fe0475d name=/runtime.v1.RuntimeService/ContainerS
tatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.090658040Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8bd8173e8ddba28758d34b7e79cc6df0c0b8cdb9a98897184d7e4604310a691d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a8759f62-3b1b-45fd-8595-40a8dcf92dd0 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.090824928Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8bd8173e8ddba28758d34b7e79cc6df0c0b8cdb9a98897184d7e4604310a691d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1710771998516795396,StartedAt:1710771998609093939,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c8q9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207d4899-9bf3-4f4b-ab21-bc35079a0bda,},Annotations:map[string]string{io.kubernetes.container.hash: 75fa6efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/207d4899-9bf3-4f4b-ab21-bc35079a0bda/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/207d4899-9bf3-4f4b-ab21-bc35079a0bda/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/207d4899-9bf3-4f4b-ab21-bc35079a0bda/containers/coredns/c27ff714,Readonly:false,SelinuxRelabel:false,
Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/207d4899-9bf3-4f4b-ab21-bc35079a0bda/volumes/kubernetes.io~projected/kube-api-access-ccdzg,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-c8q9g_207d4899-9bf3-4f4b-ab21-bc35079a0bda/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a8759f62-3b1b-45fd-8595-40a8dcf92dd0 name=/runtime.v1.RuntimeService/Container
Status
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.091462474Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:946876f232cf62e7167ded808256cb1b56bf060b281f4cadc2b1e458b1d104d4,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7ea36c4f-2f04-496a-866f-b16bfc75639f name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.091565706Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:946876f232cf62e7167ded808256cb1b56bf060b281f4cadc2b1e458b1d104d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1710771996730386918,StartedAt:1710771996772561546,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bzwvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52bafde-a25e-4496-a987-42d88c036982,},Annotations:map[string]string{io.kubernetes.container.hash: ed8fd302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f52bafde-a25e-4496-a987-42d88c036982/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f52bafde-a25e-4496-a987-42d88c036982/containers/kube-proxy/4e0b63f2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,Hos
tPath:/var/lib/kubelet/pods/f52bafde-a25e-4496-a987-42d88c036982/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/f52bafde-a25e-4496-a987-42d88c036982/volumes/kubernetes.io~projected/kube-api-access-kdqtp,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-bzwvf_f52bafde-a25e-4496-a987-42d88c036982/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" f
ile="otel-collector/interceptors.go:74" id=7ea36c4f-2f04-496a-866f-b16bfc75639f name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.091963752Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:15b995e68f898a1a1ea4cb4e8bf33f0df409736f666941e05b3b1f1b0f78a2f4,Verbose:false,}" file="otel-collector/interceptors.go:62" id=44319df6-3d3e-4ddf-8cc7-d51d7a361b84 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.092046035Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:15b995e68f898a1a1ea4cb4e8bf33f0df409736f666941e05b3b1f1b0f78a2f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1710771977479216497,StartedAt:1710771977596039460,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25ac09be5cba30be6df0e359c114df82,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/25ac09be5cba30be6df0e359c114df82/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/25ac09be5cba30be6df0e359c114df82/containers/kube-scheduler/309210c7,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-075922_25ac09be5cba30be6df0e359c114df82/kube-scheduler/2.log,Resources:&ContainerResources{Lin
ux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=44319df6-3d3e-4ddf-8cc7-d51d7a361b84 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.092534495Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c9686e1e42595f2d463c25afb530ae72b29c52df7fba35353127bd2642c1de58,Verbose:false,}" file="otel-collector/interceptors.go:62" id=6ec67a3d-7f9e-48a0-8c7f-a6ce953464d8 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.092629527Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c9686e1e42595f2d463c25afb530ae72b29c52df7fba35353127bd2642c1de58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1710771977447006540,StartedAt:1710771977556795802,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f5101ea08afdfc89ee317da149610b,},Annotations:map[string]string{io.kubernetes.container.hash: b451bdfa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a6f5101ea08afdfc89ee317da149610b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a6f5101ea08afdfc89ee317da149610b/containers/kube-apiserver/984f1b3d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapp
ing{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-075922_a6f5101ea08afdfc89ee317da149610b/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=6ec67a3d-7f9e-48a0-8c7f-a6ce953464d8 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.093256678Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c7a01556a0b322172234d832dbfbae2f30d1710f7ef44151f4d20f3f63028905,Verbose:false,}" file="otel-collector/interceptors.go:62" id=2f53779f-0578-4b07-aea7-8afb667c1e09 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.093638986Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c7a01556a0b322172234d832dbfbae2f30d1710f7ef44151f4d20f3f63028905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1710771977355514370,StartedAt:1710771977446508304,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4b77a9b7b4ea24d3851dcc48a94a25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1d4b77a9b7b4ea24d3851dcc48a94a25/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1d4b77a9b7b4ea24d3851dcc48a94a25/containers/kube-controller-manager/125140b5,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagati
on:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-default-k8s-diff-port-075922_1d4b77a9b7b4ea24d3851dcc48a94a25/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,Oom
ScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2f53779f-0578-4b07-aea7-8afb667c1e09 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.094402888Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f1f41ad4a31ca75bdbf97b201eff283130f82a87b67a1e81e8a3cae7ce149709,Verbose:false,}" file="otel-collector/interceptors.go:62" id=95617c81-a2d2-4355-9b8f-2601226ac812 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.094542926Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f1f41ad4a31ca75bdbf97b201eff283130f82a87b67a1e81e8a3cae7ce149709,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1710771977264250381,StartedAt:1710771977355063501,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.9-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ebc6541380aebc06e575576f810c42,},Annotations:map[string]string{io.kubernetes.container.hash: 4b668526,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/06ebc6541380aebc06e575576f810c42/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/06ebc6541380aebc06e575576f810c42/containers/etcd/fd3d48dd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/p
ods/kube-system_etcd-default-k8s-diff-port-075922_06ebc6541380aebc06e575576f810c42/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=95617c81-a2d2-4355-9b8f-2601226ac812 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.117121653Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e741f4df-c531-4cdf-aa61-dc7e5183b140 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.117219636Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e741f4df-c531-4cdf-aa61-dc7e5183b140 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.118946250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c5c108c-f8b7-44c5-ac01-bef882137805 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.119392853Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772544119367825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c5c108c-f8b7-44c5-ac01-bef882137805 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.120161526Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0864213a-c2da-42c9-952e-d523348f1fb1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.120240664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0864213a-c2da-42c9-952e-d523348f1fb1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:35:44 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:35:44.120472821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6530368ed5c19d50d14c500565b1329228ea8efa8dc4e08f1e8da327ce5d5be,PodSandboxId:09b0dc6471521ff046ef51a3e04c66ed727903e3dbf3dc52e1811f81f7cbcbdd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771998615366467,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8954270-a7e4-4584-860f-eea1ffd428c5,},Annotations:map[string]string{io.kubernetes.container.hash: d32f37fb,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3afcb1dd7909b321281cf5a01f61655ce3d83a2a2fc62469c60e0a9f2deb99d,PodSandboxId:dfcece09f20d56b974bc98e6c78cb281c12ccd38355e787fba31b45113df864b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998504590794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zqnfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2603cb56-7d34-4a9e-8614-9d4f4610da6d,},Annotations:map[string]string{io.kubernetes.container.hash: 95e18d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8173e8ddba28758d34b7e79cc6df0c0b8cdb9a98897184d7e4604310a691d,PodSandboxId:507494b76833c6d6657d7de62f81baee9c66ed98380b95de6d69de88e25d5ead,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998436951863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c8q9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 207d4899-9bf3-4f4b-ab21-bc35079a0bda,},Annotations:map[string]string{io.kubernetes.container.hash: 75fa6efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946876f232cf62e7167ded808256cb1b56bf060b281f4cadc2b1e458b1d104d4,PodSandboxId:8125ae53624d55518997da9202c44f7026e43a3680a50d5a5a47f2424b9d532c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,
CreatedAt:1710771996669309971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bzwvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52bafde-a25e-4496-a987-42d88c036982,},Annotations:map[string]string{io.kubernetes.container.hash: ed8fd302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b995e68f898a1a1ea4cb4e8bf33f0df409736f666941e05b3b1f1b0f78a2f4,PodSandboxId:772252e46b79d1197ab4e7b68a7c74350054576f14539b0b26229f8f9669f248,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771977348959285
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25ac09be5cba30be6df0e359c114df82,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9686e1e42595f2d463c25afb530ae72b29c52df7fba35353127bd2642c1de58,PodSandboxId:44bb254690a2d06096d00d7355fe5cb1509660af9e9144c81ab0b7f1cc63afee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771977343426375,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f5101ea08afdfc89ee317da149610b,},Annotations:map[string]string{io.kubernetes.container.hash: b451bdfa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a01556a0b322172234d832dbfbae2f30d1710f7ef44151f4d20f3f63028905,PodSandboxId:038299a9cef4ff6466cabda6a2a3fc9972e33f012ee4e1aea05c1c1664f2894b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107719772700
79264,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4b77a9b7b4ea24d3851dcc48a94a25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f41ad4a31ca75bdbf97b201eff283130f82a87b67a1e81e8a3cae7ce149709,PodSandboxId:a1a210c5eb4252cda2e953788792f52fb7380e853a0ff617fe961bfd0059d924,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:17107719
77165175994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ebc6541380aebc06e575576f810c42,},Annotations:map[string]string{io.kubernetes.container.hash: 4b668526,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0864213a-c2da-42c9-952e-d523348f1fb1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c6530368ed5c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   09b0dc6471521       storage-provisioner
	f3afcb1dd7909       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   dfcece09f20d5       coredns-5dd5756b68-zqnfs
	8bd8173e8ddba       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   507494b76833c       coredns-5dd5756b68-c8q9g
	946876f232cf6       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   8125ae53624d5       kube-proxy-bzwvf
	15b995e68f898       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   772252e46b79d       kube-scheduler-default-k8s-diff-port-075922
	c9686e1e42595       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   44bb254690a2d       kube-apiserver-default-k8s-diff-port-075922
	c7a01556a0b32       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   038299a9cef4f       kube-controller-manager-default-k8s-diff-port-075922
	f1f41ad4a31ca       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   a1a210c5eb425       etcd-default-k8s-diff-port-075922
	
	
	==> coredns [8bd8173e8ddba28758d34b7e79cc6df0c0b8cdb9a98897184d7e4604310a691d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [f3afcb1dd7909b321281cf5a01f61655ce3d83a2a2fc62469c60e0a9f2deb99d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-075922
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-075922
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=default-k8s-diff-port-075922
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T14_26_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 14:26:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-075922
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:35:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:31:49 +0000   Mon, 18 Mar 2024 14:26:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:31:49 +0000   Mon, 18 Mar 2024 14:26:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:31:49 +0000   Mon, 18 Mar 2024 14:26:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:31:49 +0000   Mon, 18 Mar 2024 14:26:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.39
	  Hostname:    default-k8s-diff-port-075922
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f21705f1d164bb184935d88a5f9583f
	  System UUID:                7f21705f-1d16-4bb1-8493-5d88a5f9583f
	  Boot ID:                    5e71147a-1e7d-42e2-b1a4-98acc5584c15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-c8q9g                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-5dd5756b68-zqnfs                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-default-k8s-diff-port-075922                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-075922             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-075922    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-bzwvf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-default-k8s-diff-port-075922             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-7c444                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m9s                   node-controller  Node default-k8s-diff-port-075922 event: Registered Node default-k8s-diff-port-075922 in Controller
	
	
	==> dmesg <==
	[  +0.056526] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043858] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar18 14:21] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.834930] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.653999] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.402718] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.065333] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.086750] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.229015] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.164948] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.327886] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +5.530155] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[  +0.068051] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.084030] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +5.702241] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.492212] kauditd_printk_skb: 69 callbacks suppressed
	[Mar18 14:26] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.514385] systemd-fstab-generator[3385]: Ignoring "noauto" option for root device
	[  +4.818503] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.483800] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[ +12.880138] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[  +0.117558] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 14:27] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [f1f41ad4a31ca75bdbf97b201eff283130f82a87b67a1e81e8a3cae7ce149709] <==
	{"level":"info","ts":"2024-03-18T14:26:17.498142Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.39:2380"}
	{"level":"info","ts":"2024-03-18T14:26:17.498245Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"dcb628089222db2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-18T14:26:17.498406Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T14:26:17.498526Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T14:26:17.498564Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T14:26:17.499081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dcb628089222db2 switched to configuration voters=(993996446961380786)"}
	{"level":"info","ts":"2024-03-18T14:26:17.499318Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ee08272957b13977","local-member-id":"dcb628089222db2","added-peer-id":"dcb628089222db2","added-peer-peer-urls":["https://192.168.83.39:2380"]}
	{"level":"info","ts":"2024-03-18T14:26:17.86078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dcb628089222db2 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T14:26:17.861098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dcb628089222db2 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T14:26:17.86119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dcb628089222db2 received MsgPreVoteResp from dcb628089222db2 at term 1"}
	{"level":"info","ts":"2024-03-18T14:26:17.861294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dcb628089222db2 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T14:26:17.861323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dcb628089222db2 received MsgVoteResp from dcb628089222db2 at term 2"}
	{"level":"info","ts":"2024-03-18T14:26:17.861431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dcb628089222db2 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T14:26:17.861501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dcb628089222db2 elected leader dcb628089222db2 at term 2"}
	{"level":"info","ts":"2024-03-18T14:26:17.866073Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dcb628089222db2","local-member-attributes":"{Name:default-k8s-diff-port-075922 ClientURLs:[https://192.168.83.39:2379]}","request-path":"/0/members/dcb628089222db2/attributes","cluster-id":"ee08272957b13977","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T14:26:17.866394Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:26:17.867776Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T14:26:17.869953Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:26:17.870836Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:26:17.870863Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T14:26:17.888756Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T14:26:17.896406Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.39:2379"}
	{"level":"info","ts":"2024-03-18T14:26:17.897196Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ee08272957b13977","local-member-id":"dcb628089222db2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:26:17.900915Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:26:17.900972Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 14:35:44 up 14 min,  0 users,  load average: 0.26, 0.31, 0.28
	Linux default-k8s-diff-port-075922 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c9686e1e42595f2d463c25afb530ae72b29c52df7fba35353127bd2642c1de58] <==
	W0318 14:31:21.173617       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:31:21.173880       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:31:21.173931       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:31:21.173723       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:31:21.174081       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:31:21.175497       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:32:20.104761       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:32:21.174757       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:32:21.174944       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:32:21.174986       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:32:21.176087       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:32:21.176256       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:32:21.176368       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:33:20.105465       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 14:34:20.104416       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:34:21.175861       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:34:21.176089       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:34:21.176131       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:34:21.177075       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:34:21.177141       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:34:21.177150       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:35:20.105810       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [c7a01556a0b322172234d832dbfbae2f30d1710f7ef44151f4d20f3f63028905] <==
	I0318 14:30:05.656077       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:30:35.109008       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:30:35.664920       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:31:05.115180       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:31:05.674942       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:31:35.123063       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:31:35.684325       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:32:05.129506       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:32:05.693173       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:32:29.511540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="657.758µs"
	E0318 14:32:35.135773       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:32:35.702655       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:32:40.507791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="239.439µs"
	E0318 14:33:05.141938       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:33:05.712612       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:33:35.152369       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:33:35.722996       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:34:05.158893       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:34:05.732373       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:34:35.164611       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:34:35.741109       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:35:05.169797       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:35:05.749661       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:35:35.175994       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:35:35.758206       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [946876f232cf62e7167ded808256cb1b56bf060b281f4cadc2b1e458b1d104d4] <==
	I0318 14:26:36.849843       1 server_others.go:69] "Using iptables proxy"
	I0318 14:26:36.882879       1 node.go:141] Successfully retrieved node IP: 192.168.83.39
	I0318 14:26:37.009449       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 14:26:37.009512       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 14:26:37.023032       1 server_others.go:152] "Using iptables Proxier"
	I0318 14:26:37.023108       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 14:26:37.023289       1 server.go:846] "Version info" version="v1.28.4"
	I0318 14:26:37.023322       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 14:26:37.025099       1 config.go:188] "Starting service config controller"
	I0318 14:26:37.025138       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 14:26:37.025181       1 config.go:97] "Starting endpoint slice config controller"
	I0318 14:26:37.025185       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 14:26:37.025621       1 config.go:315] "Starting node config controller"
	I0318 14:26:37.025656       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 14:26:37.125852       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 14:26:37.125918       1 shared_informer.go:318] Caches are synced for service config
	I0318 14:26:37.143315       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [15b995e68f898a1a1ea4cb4e8bf33f0df409736f666941e05b3b1f1b0f78a2f4] <==
	W0318 14:26:20.199880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 14:26:20.200908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 14:26:20.199959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:20.200973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:21.064106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 14:26:21.064245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 14:26:21.073175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 14:26:21.073365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 14:26:21.082271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:21.082370       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:21.176763       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 14:26:21.176870       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 14:26:21.281957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:21.282181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:21.325777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 14:26:21.325899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 14:26:21.331486       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 14:26:21.331620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 14:26:21.462627       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 14:26:21.462773       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 14:26:21.484191       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 14:26:21.484242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 14:26:21.505249       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:21.505356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 14:26:23.785360       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:33:23 default-k8s-diff-port-075922 kubelet[3712]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:33:23 default-k8s-diff-port-075922 kubelet[3712]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:33:23 default-k8s-diff-port-075922 kubelet[3712]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:33:23 default-k8s-diff-port-075922 kubelet[3712]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:33:27 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:33:27.490834    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:33:42 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:33:42.491841    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:33:55 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:33:55.490389    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:34:10 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:34:10.489383    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:34:23 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:34:23.491499    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:34:23 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:34:23.532497    3712 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:34:23 default-k8s-diff-port-075922 kubelet[3712]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:34:23 default-k8s-diff-port-075922 kubelet[3712]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:34:23 default-k8s-diff-port-075922 kubelet[3712]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:34:23 default-k8s-diff-port-075922 kubelet[3712]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:34:37 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:34:37.490908    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:34:51 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:34:51.489289    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:35:06 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:35:06.489249    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:35:19 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:35:19.489444    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:35:23 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:35:23.540885    3712 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:35:23 default-k8s-diff-port-075922 kubelet[3712]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:35:23 default-k8s-diff-port-075922 kubelet[3712]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:35:23 default-k8s-diff-port-075922 kubelet[3712]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:35:23 default-k8s-diff-port-075922 kubelet[3712]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:35:30 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:35:30.488862    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:35:41 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:35:41.491493    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	
	
	==> storage-provisioner [c6530368ed5c19d50d14c500565b1329228ea8efa8dc4e08f1e8da327ce5d5be] <==
	I0318 14:26:38.862392       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 14:26:38.877508       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 14:26:38.878744       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 14:26:38.894521       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 14:26:38.894931       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-075922_c6196403-3a40-4c95-9e9d-4699fb79f97c!
	I0318 14:26:38.895626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd295b4d-305d-4153-b3e2-0b829e6989d6", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-075922_c6196403-3a40-4c95-9e9d-4699fb79f97c became leader
	I0318 14:26:38.996837       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-075922_c6196403-3a40-4c95-9e9d-4699fb79f97c!
	

                                                
                                                
-- /stdout --
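Note: the repeated ImagePullBackOff entries in the kubelet section above all reference fake.domain/registry.k8s.io/echoserver:1.4, which lines up with the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" invocation recorded in the Audit table further down. The unpullable registry appears to be deliberate test input, so the metrics-server pod never becoming ready is expected here; the test itself was waiting on kubernetes-dashboard pods. A hedged sketch of how to confirm the effective image reference (the Deployment name metrics-server and namespace kube-system are inferred from the pod name kube-system/metrics-server-57f55c9bc5-7c444 in the kubelet log, not stated explicitly):

	# Show the image the metrics-server Deployment was rewritten to use.
	kubectl --context default-k8s-diff-port-075922 -n kube-system \
	  get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'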
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-075922 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-7c444
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-075922 describe pod metrics-server-57f55c9bc5-7c444
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-075922 describe pod metrics-server-57f55c9bc5-7c444: exit status 1 (65.685236ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-7c444" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-075922 describe pod metrics-server-57f55c9bc5-7c444: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.30s)
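The post-mortem above flags the same metrics-server pod as non-running and then fails to describe it; the describe call passes no namespace, while the kubelet log places the pod in kube-system, which may be why it comes back NotFound. A hedged sketch of the equivalent manual checks, reusing the context and pod name from the output above and adding the namespace (an assumption that it is needed here):

	# Pods in any namespace that are not in the Running phase (mirrors helpers_test.go:261).
	kubectl --context default-k8s-diff-port-075922 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# Describe the flagged pod, explicitly in kube-system, to see its pull/backoff events.
	kubectl --context default-k8s-diff-port-075922 -n kube-system \
	  describe pod metrics-server-57f55c9bc5-7c444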

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0318 14:28:10.147414 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:28:18.301835 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:29:00.517811 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:29:17.919042 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 14:29:33.192076 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:29:41.347362 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-188109 -n no-preload-188109
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:36:37.832752446 +0000 UTC m=+6715.626211651
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188109 -n no-preload-188109
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-188109 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-188109 logs -n 25: (2.100359754s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-059272 sudo find                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo find                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-059272 sudo crio                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo crio                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-059272                                       | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| delete  | -p flannel-059272                                      | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-784874 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | disable-driver-mounts-784874                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:14 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-188109             | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767719            | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-075922  | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC | 18 Mar 24 14:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC |                     |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-782728        | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-188109                  | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC | 18 Mar 24 14:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767719                 | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-075922       | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-782728             | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:17:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:17:21.149860 1129259 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:17:21.150009 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150020 1129259 out.go:304] Setting ErrFile to fd 2...
	I0318 14:17:21.150027 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150261 1129259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:17:21.150831 1129259 out.go:298] Setting JSON to false
	I0318 14:17:21.151818 1129259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21588,"bootTime":1710749853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:17:21.151904 1129259 start.go:139] virtualization: kvm guest
	I0318 14:17:21.154086 1129259 out.go:177] * [old-k8s-version-782728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:17:21.155595 1129259 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:17:21.157136 1129259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:17:21.155603 1129259 notify.go:220] Checking for updates...
	I0318 14:17:21.160112 1129259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:17:21.161672 1129259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:17:21.163212 1129259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:17:21.164653 1129259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:17:21.166692 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:17:21.167108 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.167176 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.182529 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0318 14:17:21.183003 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.183578 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.183602 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.183959 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.184192 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.186217 1129259 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 14:17:21.187902 1129259 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:17:21.188243 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.188288 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.204193 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0318 14:17:21.204646 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.205226 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.205262 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.205658 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.205879 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.243555 1129259 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 14:17:21.244857 1129259 start.go:297] selected driver: kvm2
	I0318 14:17:21.244882 1129259 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.245008 1129259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:17:21.245726 1129259 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.245812 1129259 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:17:21.261810 1129259 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:17:21.262852 1129259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:17:21.262962 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:17:21.262975 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:17:21.263064 1129259 start.go:340] cluster config:
	{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.263366 1129259 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.265819 1129259 out.go:177] * Starting "old-k8s-version-782728" primary control-plane node in "old-k8s-version-782728" cluster
	I0318 14:17:24.228169 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:21.267156 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:17:21.267198 1129259 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 14:17:21.267214 1129259 cache.go:56] Caching tarball of preloaded images
	I0318 14:17:21.267311 1129259 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:17:21.267327 1129259 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 14:17:21.267448 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:17:21.267695 1129259 start.go:360] acquireMachinesLock for old-k8s-version-782728: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:17:27.300185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:33.380164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:36.452102 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:42.536087 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:45.604211 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:51.684168 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:54.756227 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:00.836108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:03.908246 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:09.988223 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:13.060123 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:19.140179 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:22.212209 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:28.292206 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:31.364121 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:37.444195 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:40.516108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:46.596160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:49.668120 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:55.748134 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:58.820202 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:04.900183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:07.972128 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:14.052140 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:17.124242 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:23.204175 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:26.276172 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:32.356183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:35.428256 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:41.508181 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:44.580142 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:50.660193 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:53.732160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:59.812151 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:02.884164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:08.964174 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:12.036185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:18.116178 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:21.188147 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:27.268137 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:30.340177 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:33.345074 1128788 start.go:364] duration metric: took 4m12.599457373s to acquireMachinesLock for "embed-certs-767719"
	I0318 14:20:33.345136 1128788 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:33.345145 1128788 fix.go:54] fixHost starting: 
	I0318 14:20:33.345584 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:33.345638 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:33.362007 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0318 14:20:33.362504 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:33.363014 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:20:33.363037 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:33.363432 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:33.363634 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:33.363787 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:20:33.365593 1128788 fix.go:112] recreateIfNeeded on embed-certs-767719: state=Stopped err=<nil>
	I0318 14:20:33.365619 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	W0318 14:20:33.365792 1128788 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:33.367525 1128788 out.go:177] * Restarting existing kvm2 VM for "embed-certs-767719" ...
	I0318 14:20:33.368930 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Start
	I0318 14:20:33.369145 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring networks are active...
	I0318 14:20:33.370041 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network default is active
	I0318 14:20:33.370474 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network mk-embed-certs-767719 is active
	I0318 14:20:33.370832 1128788 main.go:141] libmachine: (embed-certs-767719) Getting domain xml...
	I0318 14:20:33.371609 1128788 main.go:141] libmachine: (embed-certs-767719) Creating domain...
	I0318 14:20:34.596425 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting to get IP...
	I0318 14:20:34.597292 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.597677 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.597753 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.597666 1130210 retry.go:31] will retry after 244.312377ms: waiting for machine to come up
	I0318 14:20:34.843360 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.844039 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.844082 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.843988 1130210 retry.go:31] will retry after 388.782007ms: waiting for machine to come up
	I0318 14:20:35.234931 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.235304 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.235334 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.235252 1130210 retry.go:31] will retry after 449.871291ms: waiting for machine to come up
	I0318 14:20:33.342334 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:33.342408 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.342790 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:20:33.342823 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.343061 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:20:33.344920 1128583 machine.go:97] duration metric: took 4m37.408911801s to provisionDockerMachine
	I0318 14:20:33.344982 1128583 fix.go:56] duration metric: took 4m37.431584024s for fixHost
	I0318 14:20:33.344992 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 4m37.431613044s
	W0318 14:20:33.345017 1128583 start.go:713] error starting host: provision: host is not running
	W0318 14:20:33.345209 1128583 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 14:20:33.345223 1128583 start.go:728] Will try again in 5 seconds ...
	I0318 14:20:35.687048 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.687565 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.687604 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.687508 1130210 retry.go:31] will retry after 470.225551ms: waiting for machine to come up
	I0318 14:20:36.159138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.159642 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.159668 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.159590 1130210 retry.go:31] will retry after 638.634635ms: waiting for machine to come up
	I0318 14:20:36.799431 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.799820 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.799857 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.799764 1130210 retry.go:31] will retry after 758.659569ms: waiting for machine to come up
	I0318 14:20:37.559752 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:37.560189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:37.560224 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:37.560116 1130210 retry.go:31] will retry after 1.163344023s: waiting for machine to come up
	I0318 14:20:38.724981 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:38.725498 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:38.725561 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:38.725341 1130210 retry.go:31] will retry after 1.155934539s: waiting for machine to come up
	I0318 14:20:39.882622 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:39.883025 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:39.883074 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:39.882966 1130210 retry.go:31] will retry after 1.832023161s: waiting for machine to come up
	I0318 14:20:38.347296 1128583 start.go:360] acquireMachinesLock for no-preload-188109: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:20:41.717138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:41.717723 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:41.717757 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:41.717642 1130210 retry.go:31] will retry after 1.526824443s: waiting for machine to come up
	I0318 14:20:43.246389 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:43.246960 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:43.246997 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:43.246901 1130210 retry.go:31] will retry after 2.608273558s: waiting for machine to come up
	I0318 14:20:45.858375 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:45.858919 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:45.858943 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:45.858871 1130210 retry.go:31] will retry after 2.272908905s: waiting for machine to come up
	I0318 14:20:48.134345 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:48.134774 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:48.134826 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:48.134739 1130210 retry.go:31] will retry after 3.671073699s: waiting for machine to come up
	I0318 14:20:53.273198 1128964 start.go:364] duration metric: took 4m11.791347901s to acquireMachinesLock for "default-k8s-diff-port-075922"
	I0318 14:20:53.273284 1128964 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:53.273295 1128964 fix.go:54] fixHost starting: 
	I0318 14:20:53.273834 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:53.273879 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:53.291440 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0318 14:20:53.291988 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:53.292571 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:20:53.292605 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:53.292931 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:53.293125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:20:53.293278 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:20:53.294856 1128964 fix.go:112] recreateIfNeeded on default-k8s-diff-port-075922: state=Stopped err=<nil>
	I0318 14:20:53.294889 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	W0318 14:20:53.295063 1128964 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:53.297784 1128964 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-075922" ...
	I0318 14:20:51.809859 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.810477 1128788 main.go:141] libmachine: (embed-certs-767719) Found IP for machine: 192.168.72.45
	I0318 14:20:51.810503 1128788 main.go:141] libmachine: (embed-certs-767719) Reserving static IP address...
	I0318 14:20:51.810518 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has current primary IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.811061 1128788 main.go:141] libmachine: (embed-certs-767719) Reserved static IP address: 192.168.72.45
	I0318 14:20:51.811104 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.811112 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting for SSH to be available...
	I0318 14:20:51.811137 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | skip adding static IP to network mk-embed-certs-767719 - found existing host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"}
	I0318 14:20:51.811163 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Getting to WaitForSSH function...
	I0318 14:20:51.813739 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814076 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.814121 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH client type: external
	I0318 14:20:51.814225 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa (-rw-------)
	I0318 14:20:51.814282 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:20:51.814327 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | About to run SSH command:
	I0318 14:20:51.814346 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | exit 0
	I0318 14:20:51.944192 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | SSH cmd err, output: <nil>: 
	I0318 14:20:51.944624 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetConfigRaw
	I0318 14:20:51.945477 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:51.948244 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.948667 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.948711 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.949069 1128788 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/config.json ...
	I0318 14:20:51.949305 1128788 machine.go:94] provisionDockerMachine start ...
	I0318 14:20:51.949327 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:51.949596 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:51.952267 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952653 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.952703 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952836 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:51.953047 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953200 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953376 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:51.953525 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:51.953772 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:51.953785 1128788 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:20:52.068806 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:20:52.068847 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069162 1128788 buildroot.go:166] provisioning hostname "embed-certs-767719"
	I0318 14:20:52.069198 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069500 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.072258 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072750 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.072785 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072939 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.073146 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073312 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073492 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.073730 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.073916 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.073934 1128788 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-767719 && echo "embed-certs-767719" | sudo tee /etc/hostname
	I0318 14:20:52.204197 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-767719
	
	I0318 14:20:52.204258 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.207520 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.207927 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.207959 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.208178 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.208478 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208740 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208961 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.209164 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.209352 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.209370 1128788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-767719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-767719/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-767719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:20:52.337185 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:52.337220 1128788 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:20:52.337243 1128788 buildroot.go:174] setting up certificates
	I0318 14:20:52.337253 1128788 provision.go:84] configureAuth start
	I0318 14:20:52.337264 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.337561 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:52.340693 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341061 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.341098 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341280 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.343239 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343570 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.343595 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343709 1128788 provision.go:143] copyHostCerts
	I0318 14:20:52.343782 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:20:52.343794 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:20:52.343888 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:20:52.344001 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:20:52.344010 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:20:52.344038 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:20:52.344095 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:20:52.344103 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:20:52.344126 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:20:52.344220 1128788 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.embed-certs-767719 san=[127.0.0.1 192.168.72.45 embed-certs-767719 localhost minikube]
	I0318 14:20:52.550241 1128788 provision.go:177] copyRemoteCerts
	I0318 14:20:52.550380 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:20:52.550433 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.553182 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553591 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.553626 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553824 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.554056 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.554241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.554392 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:52.645341 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:20:52.672476 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:20:52.698609 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:20:52.724434 1128788 provision.go:87] duration metric: took 387.165868ms to configureAuth
	I0318 14:20:52.724471 1128788 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:20:52.724727 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:20:52.724827 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.727323 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727700 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.727764 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727882 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.728098 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728443 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.728626 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.728859 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.728878 1128788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:20:53.012918 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:20:53.012959 1128788 machine.go:97] duration metric: took 1.063639009s to provisionDockerMachine
	I0318 14:20:53.012976 1128788 start.go:293] postStartSetup for "embed-certs-767719" (driver="kvm2")
	I0318 14:20:53.012990 1128788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:20:53.013039 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.013471 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:20:53.013505 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.016524 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.016929 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.016961 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.017153 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.017372 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.017582 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.017846 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.107977 1128788 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:20:53.113146 1128788 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:20:53.113184 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:20:53.113302 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:20:53.113423 1128788 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:20:53.113558 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:20:53.125166 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:53.152094 1128788 start.go:296] duration metric: took 139.099686ms for postStartSetup
	I0318 14:20:53.152147 1128788 fix.go:56] duration metric: took 19.807001958s for fixHost
	I0318 14:20:53.152194 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.155058 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155371 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.155401 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155643 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.155908 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156138 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156307 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.156536 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:53.156770 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:53.156786 1128788 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:20:53.272998 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771653.240528844
	
	I0318 14:20:53.273029 1128788 fix.go:216] guest clock: 1710771653.240528844
	I0318 14:20:53.273046 1128788 fix.go:229] Guest: 2024-03-18 14:20:53.240528844 +0000 UTC Remote: 2024-03-18 14:20:53.15215228 +0000 UTC m=+272.563569050 (delta=88.376564ms)
	I0318 14:20:53.273075 1128788 fix.go:200] guest clock delta is within tolerance: 88.376564ms
	I0318 14:20:53.273083 1128788 start.go:83] releasing machines lock for "embed-certs-767719", held for 19.927965733s
	I0318 14:20:53.273118 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.273431 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:53.276309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276740 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.276768 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276958 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277493 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277716 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277806 1128788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:20:53.277851 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.277976 1128788 ssh_runner.go:195] Run: cat /version.json
	I0318 14:20:53.278002 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.280799 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.280853 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281234 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281263 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281289 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281518 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281616 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281767 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281850 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281945 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282028 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282090 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.282179 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.386584 1128788 ssh_runner.go:195] Run: systemctl --version
	I0318 14:20:53.393371 1128788 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:20:53.547565 1128788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:20:53.554182 1128788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:20:53.554266 1128788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:20:53.573031 1128788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:20:53.573071 1128788 start.go:494] detecting cgroup driver to use...
	I0318 14:20:53.573197 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:20:53.591649 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:20:53.607279 1128788 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:20:53.607359 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:20:53.624327 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:20:53.640398 1128788 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:20:53.759979 1128788 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:20:53.931294 1128788 docker.go:233] disabling docker service ...
	I0318 14:20:53.931381 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:20:53.954433 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:20:53.969396 1128788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:20:54.107898 1128788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:20:54.241874 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:20:54.257748 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:20:54.278981 1128788 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:20:54.279057 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.293329 1128788 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:20:54.293390 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.304838 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.316646 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.328623 1128788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:20:54.340540 1128788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:20:54.352368 1128788 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:20:54.352433 1128788 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:20:54.368965 1128788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:20:54.389268 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:54.511182 1128788 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:20:54.657685 1128788 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:20:54.657798 1128788 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:20:54.663591 1128788 start.go:562] Will wait 60s for crictl version
	I0318 14:20:54.663670 1128788 ssh_runner.go:195] Run: which crictl
	I0318 14:20:54.667903 1128788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:20:54.707961 1128788 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:20:54.708065 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.738240 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.773562 1128788 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:20:54.775286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:54.778784 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779228 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:54.779265 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779498 1128788 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 14:20:54.784575 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:54.799207 1128788 kubeadm.go:877] updating cluster {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:20:54.799380 1128788 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:20:54.799440 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:54.839309 1128788 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:20:54.839387 1128788 ssh_runner.go:195] Run: which lz4
	I0318 14:20:54.844323 1128788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:20:54.850487 1128788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:20:54.850524 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 14:20:53.299380 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Start
	I0318 14:20:53.299595 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring networks are active...
	I0318 14:20:53.300497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network default is active
	I0318 14:20:53.300887 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network mk-default-k8s-diff-port-075922 is active
	I0318 14:20:53.301316 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Getting domain xml...
	I0318 14:20:53.302079 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Creating domain...
	I0318 14:20:54.607619 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting to get IP...
	I0318 14:20:54.608510 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609075 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.609050 1130331 retry.go:31] will retry after 282.377323ms: waiting for machine to come up
	I0318 14:20:54.892766 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893323 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.893259 1130331 retry.go:31] will retry after 264.840581ms: waiting for machine to come up
	I0318 14:20:55.160018 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160536 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160578 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.160460 1130331 retry.go:31] will retry after 402.458985ms: waiting for machine to come up
	I0318 14:20:55.564282 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564773 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.564727 1130331 retry.go:31] will retry after 382.70672ms: waiting for machine to come up
	I0318 14:20:55.949676 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950183 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950218 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.950122 1130331 retry.go:31] will retry after 676.466466ms: waiting for machine to come up
	I0318 14:20:56.798325 1128788 crio.go:444] duration metric: took 1.954051074s to copy over tarball
	I0318 14:20:56.798418 1128788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:20:59.431722 1128788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.633260911s)
	I0318 14:20:59.431777 1128788 crio.go:451] duration metric: took 2.633417573s to extract the tarball
	I0318 14:20:59.431788 1128788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:20:59.476265 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:59.534130 1128788 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:20:59.534161 1128788 cache_images.go:84] Images are preloaded, skipping loading
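The preload step above boils down to copying one lz4-compressed tarball onto the node and unpacking it over /var, after which `crictl images` finds everything and the runtime skips pulling. The extraction command from the log, annotated:

    # --xattrs / --xattrs-include keep extended attributes such as file capabilities,
    # -I lz4 streams the archive through the lz4 decompressor,
    # -C /var drops the cached image store where cri-o expects it.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4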
	I0318 14:20:59.534173 1128788 kubeadm.go:928] updating node { 192.168.72.45 8443 v1.28.4 crio true true} ...
	I0318 14:20:59.534357 1128788 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-767719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:20:59.534499 1128788 ssh_runner.go:195] Run: crio config
	I0318 14:20:59.594778 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:20:59.594814 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:20:59.594831 1128788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:20:59.594894 1128788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.45 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-767719 NodeName:embed-certs-767719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:20:59.595092 1128788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-767719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:20:59.595203 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:20:59.610298 1128788 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:20:59.610388 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:20:59.624050 1128788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 14:20:59.644283 1128788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:20:59.663987 1128788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
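The YAML dump above is the document written to /var/tmp/minikube/kubeadm.yaml.new in the scp step just shown. A quick way to inspect it on the node (paths taken from the log; these commands are a sketch, not part of the test run):

    sudo cat /var/tmp/minikube/kubeadm.yaml.new                                     # the freshly rendered config
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new  # the same comparison minikube runs later in this log before deciding on a restart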
	I0318 14:20:59.685379 1128788 ssh_runner.go:195] Run: grep 192.168.72.45	control-plane.minikube.internal$ /etc/hosts
	I0318 14:20:59.690360 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:59.705657 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:59.839158 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:20:59.857617 1128788 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719 for IP: 192.168.72.45
	I0318 14:20:59.857642 1128788 certs.go:194] generating shared ca certs ...
	I0318 14:20:59.857674 1128788 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:20:59.857839 1128788 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:20:59.857882 1128788 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:20:59.857893 1128788 certs.go:256] generating profile certs ...
	I0318 14:20:59.858006 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/client.key
	I0318 14:20:59.858061 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key.f59f641c
	I0318 14:20:59.858098 1128788 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key
	I0318 14:20:59.858268 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:20:59.858301 1128788 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:20:59.858308 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:20:59.858331 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:20:59.858360 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:20:59.858382 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:20:59.858424 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:59.859110 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:20:59.901101 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:20:59.947010 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:20:59.990882 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:00.032358 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 14:21:00.070194 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:00.108670 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:00.137760 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:00.168481 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:00.199292 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:00.228315 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:00.257409 1128788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:00.277720 1128788 ssh_runner.go:195] Run: openssl version
	I0318 14:21:00.284138 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:00.296443 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302083 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302160 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.308748 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:00.322025 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:00.334654 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340319 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340404 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.347454 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:00.359627 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:00.371865 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377236 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377335 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.387041 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
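The three symlinks created above follow OpenSSL's hashed-name convention: each CA ends up as /etc/ssl/certs/<subject-hash>.0 so verification can locate it by hash. The hash comes from the interleaved `openssl x509 -hash` calls; for example, with the value the log suggests for minikubeCA:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, b5213941 here
    ls -l /etc/ssl/certs/b5213941.0                                           # -> symlink back to minikubeCA.pem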
	I0318 14:21:00.404525 1128788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:00.412919 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:00.422577 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:00.434217 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:00.444535 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:00.452863 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:00.459979 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
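The six `-checkend 86400` probes above are the certificate-expiry check: openssl exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so a silent pass means no renewal is needed. A standalone sketch:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h"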
	I0318 14:21:00.467503 1128788 kubeadm.go:391] StartCluster: {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:00.467680 1128788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:00.467780 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.507833 1128788 cri.go:89] found id: ""
	I0318 14:21:00.507926 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:00.519958 1128788 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:00.519982 1128788 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:00.520011 1128788 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:00.520066 1128788 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:00.532229 1128788 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:00.533479 1128788 kubeconfig.go:125] found "embed-certs-767719" server: "https://192.168.72.45:8443"
	I0318 14:21:00.536185 1128788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:00.548434 1128788 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.45
	I0318 14:21:00.548484 1128788 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:00.548498 1128788 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:00.548551 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.592096 1128788 cri.go:89] found id: ""
	I0318 14:21:00.592168 1128788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:00.610826 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:00.622294 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:00.622330 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:00.622386 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:00.633009 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:00.633089 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:20:56.628134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628708 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628747 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:56.628643 1130331 retry.go:31] will retry after 703.45784ms: waiting for machine to come up
	I0318 14:20:57.334203 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334666 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334702 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:57.334600 1130331 retry.go:31] will retry after 1.177266521s: waiting for machine to come up
	I0318 14:20:58.513803 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514452 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514485 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:58.514389 1130331 retry.go:31] will retry after 1.389627955s: waiting for machine to come up
	I0318 14:20:59.906109 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906663 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906750 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:59.906632 1130331 retry.go:31] will retry after 1.239662517s: waiting for machine to come up
	I0318 14:21:01.147929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148325 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:01.148248 1130331 retry.go:31] will retry after 2.183067358s: waiting for machine to come up
	I0318 14:21:00.644684 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:00.921213 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:00.921307 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:00.932412 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.943408 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:00.943481 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.955574 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:00.966416 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:00.966483 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:00.978014 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:00.993622 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:01.128726 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.331974 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.203164646s)
	I0318 14:21:02.332035 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.574592 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.686011 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
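Because existing configuration files were found (14:21:00.519982), this is the cluster-restart path: rather than a full `kubeadm init`, individual init phases are replayed against the rendered config. Stripped of the env/PATH wrapper, the sequence above is:

    kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml
    # "kubeadm init phase addon all" follows once the apiserver reports healthy (14:21:08 below)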
	I0318 14:21:02.821189 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:02.821373 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.322200 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.822207 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.838586 1128788 api_server.go:72] duration metric: took 1.017395673s to wait for apiserver process to appear ...
	I0318 14:21:03.838622 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:03.838660 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:03.839282 1128788 api_server.go:269] stopped: https://192.168.72.45:8443/healthz: Get "https://192.168.72.45:8443/healthz": dial tcp 192.168.72.45:8443: connect: connection refused
	I0318 14:21:04.339675 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:03.333080 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333620 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333648 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:03.333583 1130331 retry.go:31] will retry after 2.259124316s: waiting for machine to come up
	I0318 14:21:05.594356 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594823 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:05.594754 1130331 retry.go:31] will retry after 2.492274875s: waiting for machine to come up
	I0318 14:21:07.054330 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:07.054373 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:07.054392 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.073841 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.073894 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.339285 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.345307 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.345340 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.838915 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.846722 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.846759 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:08.339409 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:08.344790 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:21:08.358050 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:08.358097 1128788 api_server.go:131] duration metric: took 4.519466088s to wait for apiserver health ...
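The [+]/[-] listings above are the apiserver's verbose healthz output: the probe first gets 403 (anonymous requests rejected, likely because the RBAC bootstrap roles that open /healthz to unauthenticated callers were not yet installed), then 500 while individual post-start hooks are still pending, and finally 200 once everything has settled. The same view can be reproduced by hand; a sketch assuming the same address (prefer --cacert with the cluster CA over -k when it is available):

    # ?verbose makes /healthz return the per-check list instead of a bare "ok"
    curl -sk https://192.168.72.45:8443/healthz?verbose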
	I0318 14:21:08.358121 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:21:08.358130 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:08.359982 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:21:08.361428 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:08.378195 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:21:08.409269 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:08.421874 1128788 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:08.421960 1128788 system_pods.go:61] "coredns-5dd5756b68-4dmw2" [324897fc-dd26-47f1-b8bc-4d2ed721a576] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:08.421971 1128788 system_pods.go:61] "etcd-embed-certs-767719" [df147cb8-989c-408d-ade8-547858a8c2bb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:08.421982 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [82f7d170-3b3c-448c-b824-6d263c5c1128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:08.421989 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [cd4dd4f3-a727-4864-b0e9-a89758537de9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:08.422002 1128788 system_pods.go:61] "kube-proxy-mtx9w" [b46b48ff-e4c0-4595-82c4-7c0c86103262] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:08.422010 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [63774f42-c85e-467f-9bd3-0c78d44b2681] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:08.422022 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-jr9wp" [e40748e2-ebc3-4c4f-a9cc-01bbc7416f35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:08.422030 1128788 system_pods.go:61] "storage-provisioner" [1b51e6a7-2693-4d0b-b47e-ccbcb1e46424] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:08.422047 1128788 system_pods.go:74] duration metric: took 12.746875ms to wait for pod list to return data ...
	I0318 14:21:08.422058 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:08.432361 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:08.432461 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:08.432483 1128788 node_conditions.go:105] duration metric: took 10.415127ms to run NodePressure ...
	I0318 14:21:08.432524 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:08.730544 1128788 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:08.735970 1128788 kubeadm.go:733] kubelet initialised
	I0318 14:21:08.736001 1128788 kubeadm.go:734] duration metric: took 5.422027ms waiting for restarted kubelet to initialise ...
	I0318 14:21:08.736042 1128788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:08.745586 1128788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:08.090446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090834 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:08.090779 1130331 retry.go:31] will retry after 3.31085892s: waiting for machine to come up
	I0318 14:21:12.749494 1129259 start.go:364] duration metric: took 3m51.481737314s to acquireMachinesLock for "old-k8s-version-782728"
	I0318 14:21:12.749582 1129259 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:12.749596 1129259 fix.go:54] fixHost starting: 
	I0318 14:21:12.750059 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:12.750110 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:12.772262 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0318 14:21:12.772787 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:12.773383 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:21:12.773408 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:12.773864 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:12.774101 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:12.774261 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetState
	I0318 14:21:12.776193 1129259 fix.go:112] recreateIfNeeded on old-k8s-version-782728: state=Stopped err=<nil>
	I0318 14:21:12.776227 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	W0318 14:21:12.776377 1129259 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:12.778538 1129259 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-782728" ...
	I0318 14:21:11.405935 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has current primary IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406539 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Found IP for machine: 192.168.83.39
	I0318 14:21:11.406553 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserving static IP address...
	I0318 14:21:11.407015 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.407048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | skip adding static IP to network mk-default-k8s-diff-port-075922 - found existing host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"}
	I0318 14:21:11.407066 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserved static IP address: 192.168.83.39
	I0318 14:21:11.407081 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for SSH to be available...
	I0318 14:21:11.407093 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Getting to WaitForSSH function...
	I0318 14:21:11.409327 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409674 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.409706 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409895 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH client type: external
	I0318 14:21:11.409919 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa (-rw-------)
	I0318 14:21:11.410034 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:11.410065 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | About to run SSH command:
	I0318 14:21:11.410089 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | exit 0
	I0318 14:21:11.544258 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:11.544698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetConfigRaw
	I0318 14:21:11.545370 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.548333 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.548729 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.548764 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.549053 1128964 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/config.json ...
	I0318 14:21:11.549275 1128964 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:11.549295 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:11.549533 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.551799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552156 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.552186 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552280 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.552482 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552657 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552797 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.552994 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.553261 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.553278 1128964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:11.665093 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:11.665132 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665456 1128964 buildroot.go:166] provisioning hostname "default-k8s-diff-port-075922"
	I0318 14:21:11.665493 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665730 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.668911 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.669413 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669679 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.669923 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670319 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.670530 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.670718 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.670734 1128964 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-075922 && echo "default-k8s-diff-port-075922" | sudo tee /etc/hostname
	I0318 14:21:11.807520 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-075922
	
	I0318 14:21:11.807552 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.810614 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811011 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.811047 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811257 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.811480 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811941 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.812155 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.812361 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.812387 1128964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-075922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-075922/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-075922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:11.942984 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:11.943022 1128964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:11.943078 1128964 buildroot.go:174] setting up certificates
	I0318 14:21:11.943094 1128964 provision.go:84] configureAuth start
	I0318 14:21:11.943108 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.943441 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.946659 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947091 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.947125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947328 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.949852 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950275 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.950310 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950496 1128964 provision.go:143] copyHostCerts
	I0318 14:21:11.950579 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:11.950596 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:11.950679 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:11.950859 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:11.950873 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:11.950898 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:11.950964 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:11.950971 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:11.950988 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:11.951041 1128964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-075922 san=[127.0.0.1 192.168.83.39 default-k8s-diff-port-075922 localhost minikube]
	I0318 14:21:12.019678 1128964 provision.go:177] copyRemoteCerts
	I0318 14:21:12.019756 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:12.019788 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.023122 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023603 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.023639 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023862 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.024077 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.024294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.024445 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.112914 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:12.142575 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 14:21:12.171747 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:12.200144 1128964 provision.go:87] duration metric: took 257.034667ms to configureAuth
	I0318 14:21:12.200177 1128964 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:12.200401 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:21:12.200515 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.203573 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.203978 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.204019 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.204160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.204379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204658 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.205131 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.205335 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.205367 1128964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:12.494965 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:12.494997 1128964 machine.go:97] duration metric: took 945.707691ms to provisionDockerMachine
	I0318 14:21:12.495012 1128964 start.go:293] postStartSetup for "default-k8s-diff-port-075922" (driver="kvm2")
	I0318 14:21:12.495026 1128964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:12.495048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.495450 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:12.495486 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.498444 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.498821 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498928 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.499166 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.499363 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.499560 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.588350 1128964 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:12.593611 1128964 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:12.593638 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:12.593714 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:12.593788 1128964 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:12.593875 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:12.605751 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:12.633577 1128964 start.go:296] duration metric: took 138.54984ms for postStartSetup
	I0318 14:21:12.633621 1128964 fix.go:56] duration metric: took 19.360327899s for fixHost
	I0318 14:21:12.633645 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.636446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636822 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.636850 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636989 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.637237 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637428 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637596 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.637786 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.637988 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.638002 1128964 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 14:21:12.749326 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771672.727120819
	
	I0318 14:21:12.749355 1128964 fix.go:216] guest clock: 1710771672.727120819
	I0318 14:21:12.749364 1128964 fix.go:229] Guest: 2024-03-18 14:21:12.727120819 +0000 UTC Remote: 2024-03-18 14:21:12.633625447 +0000 UTC m=+271.308784721 (delta=93.495372ms)
	I0318 14:21:12.749386 1128964 fix.go:200] guest clock delta is within tolerance: 93.495372ms
	I0318 14:21:12.749392 1128964 start.go:83] releasing machines lock for "default-k8s-diff-port-075922", held for 19.476136638s
	I0318 14:21:12.749418 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.749732 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:12.752996 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753471 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.753506 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753815 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754448 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754651 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754744 1128964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:12.754791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.754943 1128964 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:12.754970 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.758153 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758303 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758628 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758660 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758694 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758758 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758927 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758988 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759057 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759157 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759251 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759292 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.759371 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.841423 1128964 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:12.868154 1128964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:13.020652 1128964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:13.028168 1128964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:13.028267 1128964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:13.047225 1128964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:13.047264 1128964 start.go:494] detecting cgroup driver to use...
	I0318 14:21:13.047361 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:13.064518 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:13.080271 1128964 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:13.080356 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:13.095583 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:13.110387 1128964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:13.250934 1128964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:13.450657 1128964 docker.go:233] disabling docker service ...
	I0318 14:21:13.450738 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:13.471701 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:13.488157 1128964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:13.644961 1128964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:13.811333 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:13.828584 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:13.852476 1128964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:13.852557 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.864849 1128964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:13.864951 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.877723 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.890337 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.902558 1128964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:13.915858 1128964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:13.928426 1128964 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:13.928526 1128964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:13.951761 1128964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:13.964785 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:14.144432 1128964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:14.311928 1128964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:14.312078 1128964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:14.319279 1128964 start.go:562] Will wait 60s for crictl version
	I0318 14:21:14.319347 1128964 ssh_runner.go:195] Run: which crictl
	I0318 14:21:14.325325 1128964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:14.385244 1128964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:14.385344 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.426242 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.460725 1128964 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:21:10.753176 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:12.756558 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:13.760252 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:13.760295 1128788 pod_ready.go:81] duration metric: took 5.014671723s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:13.760315 1128788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:12.780014 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .Start
	I0318 14:21:12.780429 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring networks are active...
	I0318 14:21:12.781303 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network default is active
	I0318 14:21:12.781644 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network mk-old-k8s-version-782728 is active
	I0318 14:21:12.782077 1129259 main.go:141] libmachine: (old-k8s-version-782728) Getting domain xml...
	I0318 14:21:12.782826 1129259 main.go:141] libmachine: (old-k8s-version-782728) Creating domain...
	I0318 14:21:14.142992 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting to get IP...
	I0318 14:21:14.144199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.144824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.144851 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.144681 1130456 retry.go:31] will retry after 192.354686ms: waiting for machine to come up
	I0318 14:21:14.339303 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.339861 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.339886 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.339806 1130456 retry.go:31] will retry after 389.480557ms: waiting for machine to come up
	I0318 14:21:14.731567 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.732127 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.732163 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.732075 1130456 retry.go:31] will retry after 435.139168ms: waiting for machine to come up
	I0318 14:21:15.168657 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.169170 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.169209 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.169147 1130456 retry.go:31] will retry after 398.075576ms: waiting for machine to come up
	I0318 14:21:15.569132 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.569651 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.569699 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.569627 1130456 retry.go:31] will retry after 716.720722ms: waiting for machine to come up
	I0318 14:21:14.461974 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:14.465116 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465652 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:14.465696 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465903 1128964 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:14.471039 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:14.486098 1128964 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:14.486307 1128964 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:21:14.486379 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:14.526373 1128964 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:21:14.526476 1128964 ssh_runner.go:195] Run: which lz4
	I0318 14:21:14.531145 1128964 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:21:14.536370 1128964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:14.536412 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 14:21:15.769517 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:17.772721 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:18.769552 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:18.769590 1128788 pod_ready.go:81] duration metric: took 5.009265127s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:18.769610 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:16.287569 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:16.288171 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:16.288208 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:16.288111 1130456 retry.go:31] will retry after 837.119291ms: waiting for machine to come up
	I0318 14:21:17.127197 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.127610 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.127641 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.127572 1130456 retry.go:31] will retry after 786.468871ms: waiting for machine to come up
	I0318 14:21:17.916280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.916885 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.916920 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.916827 1130456 retry.go:31] will retry after 1.219601482s: waiting for machine to come up
	I0318 14:21:19.137624 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:19.138092 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:19.138124 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:19.138038 1130456 retry.go:31] will retry after 1.236592895s: waiting for machine to come up
	I0318 14:21:20.376069 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:20.376549 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:20.376574 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:20.376518 1130456 retry.go:31] will retry after 2.101851485s: waiting for machine to come up
	I0318 14:21:16.505094 1128964 crio.go:444] duration metric: took 1.973996063s to copy over tarball
	I0318 14:21:16.505250 1128964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:19.251009 1128964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.745717226s)
	I0318 14:21:19.251045 1128964 crio.go:451] duration metric: took 2.745895394s to extract the tarball
	I0318 14:21:19.251053 1128964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:21:19.308392 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:19.363143 1128964 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:21:19.363172 1128964 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:21:19.363181 1128964 kubeadm.go:928] updating node { 192.168.83.39 8444 v1.28.4 crio true true} ...
	I0318 14:21:19.363313 1128964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-075922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:21:19.363415 1128964 ssh_runner.go:195] Run: crio config
	I0318 14:21:19.415995 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:19.416028 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:19.416048 1128964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:19.416085 1128964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-075922 NodeName:default-k8s-diff-port-075922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:21:19.416297 1128964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-075922"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:19.416379 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:21:19.427340 1128964 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:19.427420 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:19.438470 1128964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0318 14:21:19.459945 1128964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:19.479728 1128964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0318 14:21:19.500079 1128964 ssh_runner.go:195] Run: grep 192.168.83.39	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:19.504746 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:19.519931 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:19.654822 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:19.675414 1128964 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922 for IP: 192.168.83.39
	I0318 14:21:19.675443 1128964 certs.go:194] generating shared ca certs ...
	I0318 14:21:19.675462 1128964 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:19.675647 1128964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:19.675707 1128964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:19.675722 1128964 certs.go:256] generating profile certs ...
	I0318 14:21:19.675861 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/client.key
	I0318 14:21:19.683399 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key.675162fd
	I0318 14:21:19.683522 1128964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key
	I0318 14:21:19.683667 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:19.683715 1128964 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:19.683730 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:19.683782 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:19.683870 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:19.683897 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:19.683940 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:19.684679 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:19.743065 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:19.787963 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:19.833491 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:19.865359 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 14:21:19.903294 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:19.932298 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:19.961860 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 14:21:19.992150 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:20.020750 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:20.047780 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:20.074566 1128964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:20.094524 1128964 ssh_runner.go:195] Run: openssl version
	I0318 14:21:20.101181 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:20.118970 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124628 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124707 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.133462 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:20.150447 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:20.165864 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173488 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173627 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.183147 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:20.200417 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:20.213973 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219407 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219488 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.226491 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:20.240299 1128964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:20.245960 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:20.253073 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:20.260144 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:20.267546 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:20.274740 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:20.282502 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:21:20.289722 1128964 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:20.289817 1128964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:20.289877 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.338941 1128964 cri.go:89] found id: ""
	I0318 14:21:20.339036 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:20.350677 1128964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:20.350706 1128964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:20.350718 1128964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:20.350775 1128964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:20.362216 1128964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:20.363622 1128964 kubeconfig.go:125] found "default-k8s-diff-port-075922" server: "https://192.168.83.39:8444"
	I0318 14:21:20.366606 1128964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:20.379417 1128964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.39
	I0318 14:21:20.379460 1128964 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:20.379481 1128964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:20.379556 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.423139 1128964 cri.go:89] found id: ""
	I0318 14:21:20.423224 1128964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:20.444111 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:20.456698 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:20.456725 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:20.456787 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:21:20.467432 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:20.467502 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:20.478894 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:21:20.490123 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:20.490216 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:20.501744 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.514020 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:20.514084 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.526805 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:21:20.538374 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:20.538452 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:20.550880 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:20.562302 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:20.687288 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.085960 1128788 pod_ready.go:102] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:21.781260 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.781287 1128788 pod_ready.go:81] duration metric: took 3.011668835s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.781297 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789501 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.789537 1128788 pod_ready.go:81] duration metric: took 8.231402ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789552 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797445 1128788 pod_ready.go:92] pod "kube-proxy-mtx9w" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.797483 1128788 pod_ready.go:81] duration metric: took 7.921289ms for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797496 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804084 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.804120 1128788 pod_ready.go:81] duration metric: took 6.613559ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804132 1128788 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:23.812751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:22.480055 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:22.480767 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:22.480805 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:22.480700 1130456 retry.go:31] will retry after 2.377253243s: waiting for machine to come up
	I0318 14:21:24.861000 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:24.861459 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:24.861513 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:24.861440 1130456 retry.go:31] will retry after 2.768860765s: waiting for machine to come up
	I0318 14:21:21.432193 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.821781 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.899411 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.984494 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:21.984624 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.484985 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.985119 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:23.009700 1128964 api_server.go:72] duration metric: took 1.025195346s to wait for apiserver process to appear ...
	I0318 14:21:23.009739 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:23.009764 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:23.010328 1128964 api_server.go:269] stopped: https://192.168.83.39:8444/healthz: Get "https://192.168.83.39:8444/healthz": dial tcp 192.168.83.39:8444: connect: connection refused
	I0318 14:21:23.510606 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.307173 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.307217 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.307238 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.345507 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.345551 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.510350 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.515684 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:26.515721 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.010509 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.015492 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:27.015526 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.510772 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.520209 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:21:27.527945 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:27.527978 1128964 api_server.go:131] duration metric: took 4.518232257s to wait for apiserver health ...
	I0318 14:21:27.527988 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:27.527994 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:27.529779 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:21:26.313296 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:28.811916 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:27.633200 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:27.633774 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:27.633824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:27.633712 1130456 retry.go:31] will retry after 2.743873993s: waiting for machine to come up
	I0318 14:21:30.380835 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:30.381280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:30.381314 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:30.381213 1130456 retry.go:31] will retry after 4.377164627s: waiting for machine to come up
	I0318 14:21:27.531259 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:27.573198 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:21:27.619813 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:27.629766 1128964 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:27.629805 1128964 system_pods.go:61] "coredns-5dd5756b68-dsrcd" [86ac331d-2539-4fbb-8cf8-56f58afa6f99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:27.629815 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [0de3bd3b-6ee2-46e2-83f7-7c637115879f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:27.629821 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [e1e689c8-642c-428e-bddf-43c2c1524563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:27.629832 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [1a200d0f-53e6-4e44-a8b0-28b9d21f763e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:27.629837 1128964 system_pods.go:61] "kube-proxy-wbnvd" [6bf13050-a150-4133-93e2-71ddcad443ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:27.629842 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [87bc17b3-75c6-4d6b-9b8f-29823398100a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:27.629847 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-4vrvb" [d12dc531-720c-4a7a-93af-69b9005666fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:27.629852 1128964 system_pods.go:61] "storage-provisioner" [856896cd-daec-4873-8f9c-c7cadeb3c16e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:27.629857 1128964 system_pods.go:74] duration metric: took 10.000416ms to wait for pod list to return data ...
	I0318 14:21:27.629866 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:27.634112 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:27.634147 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:27.634159 1128964 node_conditions.go:105] duration metric: took 4.287491ms to run NodePressure ...
	I0318 14:21:27.634190 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:27.976277 1128964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980894 1128964 kubeadm.go:733] kubelet initialised
	I0318 14:21:27.980920 1128964 kubeadm.go:734] duration metric: took 4.609836ms waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980932 1128964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:27.986151 1128964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:29.993963 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:31.313401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:33.811753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.760820 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Found IP for machine: 192.168.50.229
	I0318 14:21:34.761353 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has current primary IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761362 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserving static IP address...
	I0318 14:21:34.761782 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.761820 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserved static IP address: 192.168.50.229
	I0318 14:21:34.761845 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | skip adding static IP to network mk-old-k8s-version-782728 - found existing host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"}
	I0318 14:21:34.761864 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Getting to WaitForSSH function...
	I0318 14:21:34.761881 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting for SSH to be available...
	I0318 14:21:34.764073 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764333 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.764360 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764532 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH client type: external
	I0318 14:21:34.764572 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa (-rw-------)
	I0318 14:21:34.764613 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:34.764631 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | About to run SSH command:
	I0318 14:21:34.764647 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | exit 0
	I0318 14:21:34.896449 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:34.896855 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetConfigRaw
	I0318 14:21:34.897582 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:34.899986 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900376 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.900416 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900800 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:21:34.901117 1129259 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:34.901147 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:34.901437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:34.904052 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904424 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.904452 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904606 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:34.904785 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.904945 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.905107 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:34.905279 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:34.905513 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:34.905531 1129259 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:35.016717 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:35.016763 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017067 1129259 buildroot.go:166] provisioning hostname "old-k8s-version-782728"
	I0318 14:21:35.017099 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017382 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.020497 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.020890 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.020924 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.021057 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.021277 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021590 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.021849 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.022055 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.022070 1129259 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-782728 && echo "old-k8s-version-782728" | sudo tee /etc/hostname
	I0318 14:21:35.147357 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-782728
	
	I0318 14:21:35.147390 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.150191 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150607 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.150636 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150853 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.151114 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151347 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151546 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.151781 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.152045 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.152072 1129259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-782728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-782728/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-782728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:35.275206 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:35.275240 1129259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:35.275285 1129259 buildroot.go:174] setting up certificates
	I0318 14:21:35.275295 1129259 provision.go:84] configureAuth start
	I0318 14:21:35.275306 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.275669 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:35.278614 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279090 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.279130 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279354 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.282199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282559 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.282595 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282756 1129259 provision.go:143] copyHostCerts
	I0318 14:21:35.282849 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:35.282867 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:35.282929 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:35.283102 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:35.283114 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:35.283139 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:35.283203 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:35.283210 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:35.283227 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:35.283275 1129259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-782728 san=[127.0.0.1 192.168.50.229 localhost minikube old-k8s-version-782728]
	I0318 14:21:35.515186 1129259 provision.go:177] copyRemoteCerts
	I0318 14:21:35.515266 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:35.515318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.517932 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518244 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.518297 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518441 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.518653 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.518795 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.518970 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:35.607609 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:35.636141 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 14:21:35.664489 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:35.692201 1129259 provision.go:87] duration metric: took 416.891642ms to configureAuth
	I0318 14:21:35.692259 1129259 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:35.692491 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:21:35.692585 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.695742 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696122 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.696159 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696325 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.696561 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696767 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696934 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.697111 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.697355 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.697384 1129259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:35.994320 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:35.994352 1129259 machine.go:97] duration metric: took 1.093217385s to provisionDockerMachine
	I0318 14:21:35.994367 1129259 start.go:293] postStartSetup for "old-k8s-version-782728" (driver="kvm2")
	I0318 14:21:35.994383 1129259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:35.994415 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:35.994757 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:35.994799 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.997438 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.997814 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.997850 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.998044 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.998241 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.998437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.998571 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.089357 1129259 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:36.094372 1129259 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:36.094407 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:36.094499 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:36.094617 1129259 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:36.094714 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:36.106796 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:36.135520 1129259 start.go:296] duration metric: took 141.136354ms for postStartSetup
	I0318 14:21:36.135573 1129259 fix.go:56] duration metric: took 23.385978091s for fixHost
	I0318 14:21:36.135607 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.139108 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139458 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.139491 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139689 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.139978 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140226 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140353 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.140528 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:36.140755 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:36.140771 1129259 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:36.252999 1128583 start.go:364] duration metric: took 57.905644198s to acquireMachinesLock for "no-preload-188109"
	I0318 14:21:36.253054 1128583 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:36.253063 1128583 fix.go:54] fixHost starting: 
	I0318 14:21:36.253510 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:36.253545 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:36.271856 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0318 14:21:36.272254 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:36.272790 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:21:36.272822 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:36.273237 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:36.273446 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:36.273614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:21:36.275414 1128583 fix.go:112] recreateIfNeeded on no-preload-188109: state=Stopped err=<nil>
	I0318 14:21:36.275440 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	W0318 14:21:36.275623 1128583 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:36.277528 1128583 out.go:177] * Restarting existing kvm2 VM for "no-preload-188109" ...
	I0318 14:21:31.995770 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.495078 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.252848 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771696.238093940
	
	I0318 14:21:36.252877 1129259 fix.go:216] guest clock: 1710771696.238093940
	I0318 14:21:36.252884 1129259 fix.go:229] Guest: 2024-03-18 14:21:36.23809394 +0000 UTC Remote: 2024-03-18 14:21:36.13557956 +0000 UTC m=+255.035410784 (delta=102.51438ms)
	I0318 14:21:36.252906 1129259 fix.go:200] guest clock delta is within tolerance: 102.51438ms
	I0318 14:21:36.252911 1129259 start.go:83] releasing machines lock for "old-k8s-version-782728", held for 23.503358875s
	I0318 14:21:36.252936 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.253200 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:36.256277 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256711 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.256741 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256901 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257487 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257702 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257827 1129259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:36.257887 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.258009 1129259 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:36.258034 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.260840 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261336 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261358 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261456 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.261692 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.261789 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261818 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261892 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.261982 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.262127 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.262173 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.262300 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.262429 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.345131 1129259 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:36.371649 1129259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:36.524261 1129259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:36.533020 1129259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:36.533151 1129259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:36.551817 1129259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:36.551860 1129259 start.go:494] detecting cgroup driver to use...
	I0318 14:21:36.551933 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:36.575948 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:36.596748 1129259 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:36.596820 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:36.614156 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:36.630681 1129259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:36.753374 1129259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:36.944402 1129259 docker.go:233] disabling docker service ...
	I0318 14:21:36.944496 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:36.966727 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:36.987565 1129259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:37.121256 1129259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:37.264652 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:37.281737 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:37.306307 1129259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 14:21:37.306374 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.318728 1129259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:37.318818 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.330587 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.343063 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.356170 1129259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:37.369932 1129259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:37.380417 1129259 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:37.380487 1129259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:37.397409 1129259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:37.414745 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:37.571427 1129259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:37.747275 1129259 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:37.747357 1129259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:37.752838 1129259 start.go:562] Will wait 60s for crictl version
	I0318 14:21:37.752922 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:37.758286 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:37.799301 1129259 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:37.799400 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.838257 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.889692 1129259 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 14:21:35.812465 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:37.820263 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.313683 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.278973 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Start
	I0318 14:21:36.279160 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring networks are active...
	I0318 14:21:36.280043 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network default is active
	I0318 14:21:36.280495 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network mk-no-preload-188109 is active
	I0318 14:21:36.281014 1128583 main.go:141] libmachine: (no-preload-188109) Getting domain xml...
	I0318 14:21:36.281995 1128583 main.go:141] libmachine: (no-preload-188109) Creating domain...
	I0318 14:21:37.644409 1128583 main.go:141] libmachine: (no-preload-188109) Waiting to get IP...
	I0318 14:21:37.645406 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.645958 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.646047 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.645922 1130597 retry.go:31] will retry after 223.965782ms: waiting for machine to come up
	I0318 14:21:37.871397 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.871933 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.871971 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.871882 1130597 retry.go:31] will retry after 272.743353ms: waiting for machine to come up
	I0318 14:21:38.146680 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.147278 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.147309 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.147211 1130597 retry.go:31] will retry after 414.468616ms: waiting for machine to come up
	I0318 14:21:38.563199 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.563768 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.563794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.563718 1130597 retry.go:31] will retry after 582.588791ms: waiting for machine to come up
	I0318 14:21:39.147611 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.148410 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.148436 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.148315 1130597 retry.go:31] will retry after 686.425224ms: waiting for machine to come up
	I0318 14:21:39.836964 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.837647 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.837677 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.837593 1130597 retry.go:31] will retry after 878.564369ms: waiting for machine to come up
	I0318 14:21:40.717644 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:40.718346 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:40.718380 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:40.718276 1130597 retry.go:31] will retry after 1.183201382s: waiting for machine to come up
	I0318 14:21:37.891038 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:37.894295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.894865 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:37.894896 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.895237 1129259 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:37.899967 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:37.916249 1129259 kubeadm.go:877] updating cluster {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:37.916384 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:21:37.916449 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:37.974406 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:37.974492 1129259 ssh_runner.go:195] Run: which lz4
	I0318 14:21:37.979374 1129259 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:21:37.984355 1129259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:37.984400 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 14:21:39.978421 1129259 crio.go:444] duration metric: took 1.99908094s to copy over tarball
	I0318 14:21:39.978524 1129259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:36.995480 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:39.005382 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.495300 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.495345 1128964 pod_ready.go:81] duration metric: took 12.509166884s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.495358 1128964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504432 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.504467 1128964 pod_ready.go:81] duration metric: took 9.100778ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504480 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515466 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.515506 1128964 pod_ready.go:81] duration metric: took 11.017212ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515519 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525891 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.525929 1128964 pod_ready.go:81] duration metric: took 10.399892ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525943 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534161 1128964 pod_ready.go:92] pod "kube-proxy-wbnvd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.534196 1128964 pod_ready.go:81] duration metric: took 8.245545ms for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534208 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:42.314504 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:44.812532 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:41.902972 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:41.903707 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:41.903736 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:41.903670 1130597 retry.go:31] will retry after 1.282612289s: waiting for machine to come up
	I0318 14:21:43.188745 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:43.189303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:43.189332 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:43.189257 1130597 retry.go:31] will retry after 1.175485401s: waiting for machine to come up
	I0318 14:21:44.366602 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:44.367162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:44.367191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:44.367121 1130597 retry.go:31] will retry after 1.700678954s: waiting for machine to come up
	I0318 14:21:43.321091 1129259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342462355s)
	I0318 14:21:43.321144 1129259 crio.go:451] duration metric: took 3.342687518s to extract the tarball
	I0318 14:21:43.321155 1129259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:21:43.365776 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:43.433785 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:43.433824 1129259 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:43.433900 1129259 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.434017 1129259 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.434032 1129259 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 14:21:43.434046 1129259 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.434053 1129259 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.434305 1129259 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436059 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.436080 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.436108 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.436157 1129259 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.436171 1129259 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436220 1129259 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 14:21:43.436239 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.436852 1129259 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.592274 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.597491 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.602837 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.613030 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.613827 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.626606 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.643937 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 14:21:43.712054 1129259 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 14:21:43.712144 1129259 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.712203 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.745459 1129259 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 14:21:43.745524 1129259 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.745578 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.804000 1129259 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 14:21:43.804069 1129259 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.804132 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.818890 1129259 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 14:21:43.818946 1129259 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.818948 1129259 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 14:21:43.818984 1129259 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.818996 1129259 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 14:21:43.819000 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819013 1129259 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.819034 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819043 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819047 1129259 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 14:21:43.819079 1129259 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 14:21:43.819111 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819145 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.819113 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.819191 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.900808 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 14:21:43.900881 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 14:21:43.900956 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 14:21:43.900960 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.901030 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 14:21:43.901092 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.901124 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.979791 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 14:21:43.999132 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 14:21:44.055513 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:44.211993 1129259 cache_images.go:92] duration metric: took 778.138355ms to LoadCachedImages
	W0318 14:21:44.212165 1129259 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0318 14:21:44.212193 1129259 kubeadm.go:928] updating node { 192.168.50.229 8443 v1.20.0 crio true true} ...
	I0318 14:21:44.212368 1129259 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-782728 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:21:44.212495 1129259 ssh_runner.go:195] Run: crio config
	I0318 14:21:44.269727 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:21:44.269766 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:44.269785 1129259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:44.269814 1129259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-782728 NodeName:old-k8s-version-782728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 14:21:44.270015 1129259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-782728"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:44.270105 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 14:21:44.282940 1129259 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:44.283039 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:44.295320 1129259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 14:21:44.315686 1129259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:44.335233 1129259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 14:21:44.357698 1129259 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:44.362264 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:44.377101 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:44.528190 1129259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:44.549708 1129259 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728 for IP: 192.168.50.229
	I0318 14:21:44.549735 1129259 certs.go:194] generating shared ca certs ...
	I0318 14:21:44.549763 1129259 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:44.549989 1129259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:44.550058 1129259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:44.550074 1129259 certs.go:256] generating profile certs ...
	I0318 14:21:44.550213 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.key
	I0318 14:21:44.550297 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612
	I0318 14:21:44.550356 1129259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key
	I0318 14:21:44.550551 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:44.550592 1129259 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:44.550606 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:44.550645 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:44.550677 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:44.550723 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:44.550778 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:44.551493 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:44.612076 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:44.644841 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:44.677687 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:44.719459 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 14:21:44.767865 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 14:21:44.816764 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:44.860167 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:44.891216 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:44.927632 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:44.965589 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:45.002269 1129259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:45.025347 1129259 ssh_runner.go:195] Run: openssl version
	I0318 14:21:45.032361 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:45.046783 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052835 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052942 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.060025 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:45.073939 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:45.087380 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092866 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092945 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.099328 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:45.112233 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:45.126449 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132566 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132667 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.139307 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:45.153117 1129259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:45.158588 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:45.166096 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:45.173537 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:45.181337 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:45.189126 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:45.197163 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:21:45.206171 1129259 kubeadm.go:391] StartCluster: {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:45.206295 1129259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:45.206370 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.247013 1129259 cri.go:89] found id: ""
	I0318 14:21:45.247119 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:45.261917 1129259 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:45.261947 1129259 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:45.261955 1129259 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:45.262015 1129259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:45.276154 1129259 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:45.277263 1129259 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:21:45.277937 1129259 kubeconfig.go:62] /home/jenkins/minikube-integration/18427-1067917/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-782728" cluster setting kubeconfig missing "old-k8s-version-782728" context setting]
	I0318 14:21:45.278862 1129259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:45.280825 1129259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:45.295159 1129259 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.229
	I0318 14:21:45.295211 1129259 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:45.295255 1129259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:45.295321 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.343156 1129259 cri.go:89] found id: ""
	I0318 14:21:45.343242 1129259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:45.361812 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:45.376218 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:45.376250 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:45.376314 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:45.386913 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:45.387056 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:45.398244 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:45.409397 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:45.409476 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:45.421057 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.432124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:45.432193 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.443793 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:45.454348 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:45.454463 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
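(Note: the four grep/rm pairs above are the stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so the following kubeadm phases can regenerate it. A minimal Go sketch of building that command sequence; the "||" chaining and the helper name are illustrative simplifications, not minikube's actual kubeadm.go, which issues grep and rm as separate ssh commands.)

	package main

	import "fmt"

	// staleConfigCleanupCommands mirrors the grep-then-remove pattern in the log:
	// each kubeconfig under /etc/kubernetes is kept only if it already points at
	// the expected control-plane endpoint; otherwise it is deleted so kubeadm can
	// regenerate it in the kubeconfig phase.
	func staleConfigCleanupCommands(endpoint string) []string {
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		var cmds []string
		for _, f := range files {
			path := "/etc/kubernetes/" + f
			cmds = append(cmds, fmt.Sprintf("sudo grep %s %s || sudo rm -f %s", endpoint, path, path))
		}
		return cmds
	}

	func main() {
		for _, c := range staleConfigCleanupCommands("https://control-plane.minikube.internal:8443") {
			fmt.Println(c)
		}
	}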
	I0318 14:21:45.465286 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:45.477199 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:45.613588 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:41.690971 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:41.691009 1128964 pod_ready.go:81] duration metric: took 1.156786821s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:41.691020 1128964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:44.189110 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.201644 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.813954 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:48.817402 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.069196 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:46.069747 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:46.069797 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:46.069687 1130597 retry.go:31] will retry after 2.354521412s: waiting for machine to come up
	I0318 14:21:48.425714 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:48.426186 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:48.426219 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:48.426147 1130597 retry.go:31] will retry after 2.74319235s: waiting for machine to come up
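(Note: the retry.go lines above wait for the libvirt domain to obtain a DHCP lease, sleeping a growing interval between attempts ("will retry after 2.3s", "after 2.7s", ...). The sketch below is a rough Go approximation of that retry-with-growing-delay loop; lookupIP, the doubling backoff, and the jitter are assumptions for illustration, not the real minikube retry package.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the libvirt network's DHCP leases for the
	// domain's MAC address. Placeholder only; it always fails here.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address of domain")
	}

	// waitForIP retries with a growing, jittered delay, similar in spirit to the
	// retry messages in the log while the machine comes up.
	func waitForIP(attempts int) (string, error) {
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			wait := delay + jitter
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2 // back off between attempts
		}
		return "", errors.New("machine never reported an IP address")
	}

	func main() {
		if _, err := waitForIP(5); err != nil {
			fmt.Println(err)
		}
	}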
	I0318 14:21:46.567767 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.838421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.993039 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:47.096766 1129259 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:47.096883 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:47.596963 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.097569 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.597879 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.097195 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.597924 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.097885 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.597926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:51.096984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
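(Note: the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are the apiserver wait loop polling roughly every 500ms until the process shows up. A minimal Go sketch of that poll-until-found pattern; the runSSH-less local exec and the 4-minute budget are assumptions for illustration, not minikube's actual api_server.go.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for a kube-apiserver process until it appears or the
	// deadline passes. Locally it shells out to pgrep; minikube runs the same
	// command over SSH inside the guest.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// -x: match the whole command line, -n: newest match, -f: match args.
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("apiserver pid: %s", out)
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}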
	I0318 14:21:48.699275 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:50.699690 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.311999 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:53.811066 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.173264 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.173844 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:51.173880 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:51.173784 1130597 retry.go:31] will retry after 4.489599719s: waiting for machine to come up
	I0318 14:21:55.665080 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665639 1128583 main.go:141] libmachine: (no-preload-188109) Found IP for machine: 192.168.61.40
	I0318 14:21:55.665675 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has current primary IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665686 1128583 main.go:141] libmachine: (no-preload-188109) Reserving static IP address...
	I0318 14:21:55.666111 1128583 main.go:141] libmachine: (no-preload-188109) Reserved static IP address: 192.168.61.40
	I0318 14:21:55.666149 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.666164 1128583 main.go:141] libmachine: (no-preload-188109) Waiting for SSH to be available...
	I0318 14:21:55.666191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | skip adding static IP to network mk-no-preload-188109 - found existing host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"}
	I0318 14:21:55.666205 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Getting to WaitForSSH function...
	I0318 14:21:55.668473 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668792 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.668837 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668947 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH client type: external
	I0318 14:21:55.668989 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa (-rw-------)
	I0318 14:21:55.669020 1128583 main.go:141] libmachine: (no-preload-188109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:55.669043 1128583 main.go:141] libmachine: (no-preload-188109) DBG | About to run SSH command:
	I0318 14:21:55.669095 1128583 main.go:141] libmachine: (no-preload-188109) DBG | exit 0
	I0318 14:21:55.796228 1128583 main.go:141] libmachine: (no-preload-188109) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:55.796668 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetConfigRaw
	I0318 14:21:55.797378 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:55.800241 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.800716 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.800771 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.801150 1128583 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/config.json ...
	I0318 14:21:55.801416 1128583 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:55.801441 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:55.801690 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.804667 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.597867 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.097894 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.597872 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.096949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.597262 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.097637 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.597078 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.097246 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.597940 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:56.097312 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.700698 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.198658 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.805029 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.805269 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.806759 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.806983 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807220 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807421 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.807623 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.807952 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.807982 1128583 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:55.920939 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:55.920993 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921259 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:21:55.921292 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921510 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.924430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.924921 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.924962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.925153 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.925431 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925792 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.926029 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.926301 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.926320 1128583 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-188109 && echo "no-preload-188109" | sudo tee /etc/hostname
	I0318 14:21:56.051873 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-188109
	
	I0318 14:21:56.051915 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.055015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055387 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.055422 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055659 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.055887 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056058 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056190 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.056318 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.056508 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.056525 1128583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-188109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-188109/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-188109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:56.178366 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:56.178401 1128583 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:56.178443 1128583 buildroot.go:174] setting up certificates
	I0318 14:21:56.178454 1128583 provision.go:84] configureAuth start
	I0318 14:21:56.178465 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:56.178859 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:56.181995 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.182457 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182724 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.185337 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185623 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.185649 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185880 1128583 provision.go:143] copyHostCerts
	I0318 14:21:56.185968 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:56.185983 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:56.186073 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:56.186249 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:56.186264 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:56.186296 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:56.186392 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:56.186406 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:56.186432 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:56.186511 1128583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.no-preload-188109 san=[127.0.0.1 192.168.61.40 localhost minikube no-preload-188109]
	I0318 14:21:56.332196 1128583 provision.go:177] copyRemoteCerts
	I0318 14:21:56.332267 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:56.332295 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.335310 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335604 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.335639 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335787 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.336002 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.336170 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.336310 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.427529 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:56.459132 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:21:56.488690 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:56.516043 1128583 provision.go:87] duration metric: took 337.568576ms to configureAuth
	I0318 14:21:56.516088 1128583 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:56.516309 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:21:56.516457 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.519576 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.519998 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.520059 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.520237 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.520460 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520677 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520876 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.521065 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.521290 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.521307 1128583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:56.831034 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:56.831076 1128583 machine.go:97] duration metric: took 1.029643209s to provisionDockerMachine
	I0318 14:21:56.831092 1128583 start.go:293] postStartSetup for "no-preload-188109" (driver="kvm2")
	I0318 14:21:56.831107 1128583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:56.831126 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:56.831549 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:56.831611 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.834520 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.834962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.834992 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.835234 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.835415 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.835582 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.835743 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.927694 1128583 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:56.932973 1128583 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:56.933002 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:56.933088 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:56.933200 1128583 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:56.933345 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:56.943594 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:56.971483 1128583 start.go:296] duration metric: took 140.368525ms for postStartSetup
	I0318 14:21:56.971564 1128583 fix.go:56] duration metric: took 20.718501273s for fixHost
	I0318 14:21:56.971618 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.974721 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975185 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.975250 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975409 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.975679 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.975885 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.976049 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.976242 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.976438 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.976453 1128583 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:57.089795 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771717.066528661
	
	I0318 14:21:57.089823 1128583 fix.go:216] guest clock: 1710771717.066528661
	I0318 14:21:57.089834 1128583 fix.go:229] Guest: 2024-03-18 14:21:57.066528661 +0000 UTC Remote: 2024-03-18 14:21:56.971568576 +0000 UTC m=+361.214853207 (delta=94.960085ms)
	I0318 14:21:57.089865 1128583 fix.go:200] guest clock delta is within tolerance: 94.960085ms
	I0318 14:21:57.089873 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 20.836840869s
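(Note: the fix.go lines above read the guest wall clock over SSH, compare it to the host clock, and accept the ~95ms skew without resyncing. A rough Go sketch of that comparison using the timestamps from the log; the one-second tolerance is an assumption for illustration, the log only shows that ~95ms is accepted.)

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance reports whether the guest clock read over SSH is
	// close enough to the host clock that no resync is needed.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Values taken from the log above: guest 14:21:57.066..., host 14:21:56.971...
		guest := time.Date(2024, 3, 18, 14, 21, 57, 66528661, time.UTC)
		host := time.Date(2024, 3, 18, 14, 21, 56, 971568576, time.UTC)
		delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
		fmt.Printf("delta=%s within tolerance=%v\n", delta, ok) // delta is about 94.96ms, ok=true
	}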
	I0318 14:21:57.089898 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.090297 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:57.094015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094517 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.094563 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094920 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095607 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095844 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095978 1128583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:57.096034 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.096182 1128583 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:57.096221 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.099303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099329 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099754 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099854 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099869 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.100103 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100118 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100339 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100568 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100578 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100766 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.100781 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.203060 1128583 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:57.209943 1128583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:57.368686 1128583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:57.376289 1128583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:57.376375 1128583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:57.394365 1128583 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:57.394405 1128583 start.go:494] detecting cgroup driver to use...
	I0318 14:21:57.394488 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:57.412172 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:57.428895 1128583 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:57.428988 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:57.445064 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:57.461255 1128583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:57.596381 1128583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:57.774782 1128583 docker.go:233] disabling docker service ...
	I0318 14:21:57.774890 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:57.791820 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:57.807412 1128583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:57.961890 1128583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:58.118122 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:58.133994 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:58.155336 1128583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:58.155429 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.167537 1128583 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:58.167642 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.180814 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.193997 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.206817 1128583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:58.220843 1128583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:58.232012 1128583 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:58.232073 1128583 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:58.246610 1128583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:58.260393 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:58.416723 1128583 ssh_runner.go:195] Run: sudo systemctl restart crio
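(Note: the block above configures CRI-O by rewriting /etc/crio/crio.conf.d/02-crio.conf in place with sed, pinning the pause image and switching the cgroup manager, then reloading systemd and restarting crio. A small Go sketch that assembles the same command strings; how they are executed is omitted here, since minikube pushes them through its ssh_runner rather than printing them.)

	package main

	import "fmt"

	// crioConfigCommands builds the shell commands seen in the log that point
	// CRI-O at a pause image and the cgroupfs cgroup manager.
	func crioConfigCommands(pauseImage, cgroupManager string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
	}

	func main() {
		for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
			fmt.Println(cmd)
		}
	}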
	I0318 14:21:58.588776 1128583 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:58.588864 1128583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:58.594689 1128583 start.go:562] Will wait 60s for crictl version
	I0318 14:21:58.594787 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:58.599287 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:58.634954 1128583 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:58.635059 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.667031 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.703316 1128583 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 14:21:55.812079 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:57.813027 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.310988 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:58.704763 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:58.708030 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708495 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:58.708527 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708738 1128583 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:58.713408 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:58.726934 1128583 kubeadm.go:877] updating cluster {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:58.727067 1128583 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:21:58.727105 1128583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:58.764875 1128583 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 14:21:58.764904 1128583 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:58.764976 1128583 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.765019 1128583 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.765091 1128583 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.765117 1128583 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.765142 1128583 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.765158 1128583 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.765125 1128583 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.765098 1128583 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766495 1128583 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766589 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.766592 1128583 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.766768 1128583 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.766924 1128583 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.766492 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.919274 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 14:21:58.934955 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.945887 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.954907 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.961334 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.976485 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.991515 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.100572 1128583 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 14:21:59.100624 1128583 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.100684 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.125681 1128583 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 14:21:59.125740 1128583 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.125799 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.138461 1128583 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 14:21:59.138521 1128583 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.138579 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149655 1128583 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 14:21:59.149697 1128583 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.149763 1128583 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149803 1128583 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.149831 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.149839 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149790 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149875 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.231815 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.231851 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 14:21:59.231959 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:21:59.232052 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.232060 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.232064 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.232148 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.317997 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 14:21:59.318029 1128583 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318083 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318116 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318158 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318213 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318240 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.318246 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 14:21:59.318252 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318281 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 14:21:59.318315 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.364549 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:56.597953 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.098324 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.598002 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.097907 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.597192 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.097990 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.597523 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.097862 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:01.097925 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.703771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.200048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:02.313802 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.812944 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:03.246360 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.928017963s)
	I0318 14:22:03.246414 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246364 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.928251379s)
	I0318 14:22:03.246429 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 14:22:03.246439 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.92820974s)
	I0318 14:22:03.246454 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246468 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246415 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.928141711s)
	I0318 14:22:03.246512 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246515 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246516 1128583 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.88192635s)
	I0318 14:22:03.246587 1128583 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 14:22:03.246641 1128583 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:03.246704 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:22:01.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.097198 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.597105 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.097996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.597914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.097805 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.597949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.097415 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.597222 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:06.096954 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.203222 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.699887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.813730 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.311491 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.317600 1128583 ssh_runner.go:235] Completed: which crictl: (3.070863461s)
	I0318 14:22:06.317700 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:06.317775 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.071235517s)
	I0318 14:22:06.317805 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 14:22:06.317837 1128583 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.317907 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.370328 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 14:22:06.370435 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.243401402s)
	I0318 14:22:08.613903 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.295918452s)
	I0318 14:22:08.613917 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 14:22:08.613941 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:08.613994 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:06.597785 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.097171 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.597738 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.097476 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.596984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.097503 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.597464 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.096998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.597822 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.097597 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.199978 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.200394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.312752 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:13.812826 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.076840 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462814214s)
	I0318 14:22:11.076881 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 14:22:11.076917 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:11.076968 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:13.332851 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.25584847s)
	I0318 14:22:13.332896 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 14:22:13.332932 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:13.333002 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:14.705785 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.372744893s)
	I0318 14:22:14.705843 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 14:22:14.705881 1128583 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:14.705945 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:15.467380 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 14:22:15.467432 1128583 cache_images.go:123] Successfully loaded all cached images
	I0318 14:22:15.467439 1128583 cache_images.go:92] duration metric: took 16.702522125s to LoadCachedImages
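The lines above show the cache-loading pattern used for the no-preload profile: for each required image, minikube first runs stat on the image tarball already on the node, skips the transfer when it exists, and then streams it into CRI-O's store with "sudo podman load -i". Below is a minimal local sketch of that skip-or-load step, assuming passwordless sudo and podman on the machine it runs on; the real run executes these commands over SSH and copies missing tarballs from the host cache first, which is omitted here.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mimics the skip-or-load step from the log: if the tarball is
// already present, only "podman load" runs; otherwise the caller would transfer
// it from the local cache first (transfer step omitted in this sketch).
func loadCachedImage(tarball string) error {
	// Equivalent of: stat -c "%s %y" <tarball>
	if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err != nil {
		return fmt.Errorf("tarball missing, would copy from cache first: %w", err)
	}
	// Equivalent of: sudo podman load -i <tarball>
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Illustrative path; the run above uses /var/lib/minikube/images/<name>.
	if err := loadCachedImage("/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
		fmt.Println(err)
	}
}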
	I0318 14:22:15.467456 1128583 kubeadm.go:928] updating node { 192.168.61.40 8443 v1.29.0-rc.2 crio true true} ...
	I0318 14:22:15.467619 1128583 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-188109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:22:15.467790 1128583 ssh_runner.go:195] Run: crio config
	I0318 14:22:15.520678 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:15.520705 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:15.520718 1128583 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:22:15.520740 1128583 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.40 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-188109 NodeName:no-preload-188109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:22:15.520893 1128583 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.40
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-188109"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.40
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.40"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:22:15.520965 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 14:22:15.534187 1128583 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:22:15.534260 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:22:15.546509 1128583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 14:22:15.567029 1128583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 14:22:15.586866 1128583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
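The three documents generated above are pushed to the node as the kubelet drop-in (10-kubeadm.conf), the kubelet unit, and kubeadm.yaml.new. One detail worth noting in the KubeletConfiguration is that every evictionHard threshold is set to 0%, which, as the inline comment says, disables disk-pressure eviction on the test VM. The following is a small sketch, assuming gopkg.in/yaml.v3 is available, that parses just that fragment to show the field names in use; the struct is illustrative, not minikube's own type.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Only the fields inspected below are modelled; the rest of the document is ignored.
type kubeletConfig struct {
	CgroupDriver             string            `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string            `yaml:"containerRuntimeEndpoint"`
	EvictionHard             map[string]string `yaml:"evictionHard"`
	FailSwapOn               bool              `yaml:"failSwapOn"`
}

const doc = `
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
evictionHard:
  nodefs.available: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		panic(err)
	}
	fmt.Println(kc.ContainerRuntimeEndpoint, kc.EvictionHard["nodefs.available"])
}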
	I0318 14:22:15.609161 1128583 ssh_runner.go:195] Run: grep 192.168.61.40	control-plane.minikube.internal$ /etc/hosts
	I0318 14:22:15.614800 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.40	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
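The one-liner just above rewrites /etc/hosts after the grep check misses: it filters out any line whose last tab-separated field is control-plane.minikube.internal, appends the fresh IP mapping, and copies the temp file back with sudo. A pure-Go sketch of the same filter-and-append over the file contents follows; the hostname and new IP come from the log, while the stale 192.168.61.39 entry is made up for illustration.

package main

import (
	"fmt"
	"strings"
)

// updateHosts mirrors the shell pipeline above: drop any line ending in
// "\t<host>" and append the fresh "ip\thost" mapping.
func updateHosts(current, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(current, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for the control-plane name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	old := "127.0.0.1\tlocalhost\n192.168.61.39\tcontrol-plane.minikube.internal\n"
	fmt.Print(updateHosts(old, "192.168.61.40", "control-plane.minikube.internal"))
}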
	I0318 14:22:15.630088 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:22:15.754729 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:22:15.774062 1128583 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109 for IP: 192.168.61.40
	I0318 14:22:15.774093 1128583 certs.go:194] generating shared ca certs ...
	I0318 14:22:15.774114 1128583 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:22:15.774374 1128583 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:22:15.774434 1128583 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:22:15.774448 1128583 certs.go:256] generating profile certs ...
	I0318 14:22:15.774537 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/client.key
	I0318 14:22:15.774607 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key.8d4024a9
	I0318 14:22:15.774652 1128583 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key
	I0318 14:22:15.774833 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:22:15.774871 1128583 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:22:15.774882 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:22:15.774926 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:22:15.774972 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:22:15.775031 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:22:15.775106 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:22:15.775902 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:22:11.597959 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.097914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.597046 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.097863 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.597617 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.097268 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.597088 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.097142 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.597902 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:16.098091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.698561 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:14.199200 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.200026 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.312392 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:18.812463 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:15.821418 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:22:15.874044 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:22:15.910814 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:22:15.965889 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 14:22:16.001003 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:22:16.030033 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:22:16.060519 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:22:16.089952 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:22:16.119397 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:22:16.150036 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:22:16.179489 1128583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:22:16.201823 1128583 ssh_runner.go:195] Run: openssl version
	I0318 14:22:16.208496 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:22:16.222723 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228161 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228239 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.234994 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:22:16.248672 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:22:16.262626 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268255 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268361 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.274868 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:22:16.287251 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:22:16.299690 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304633 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304718 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.311230 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:22:16.325483 1128583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:22:16.331012 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:22:16.338731 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:22:16.346289 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:22:16.353403 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:22:16.359967 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:22:16.367151 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
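The openssl "-checkend 86400" calls above verify that each control-plane certificate (apiserver, etcd, front-proxy clients) remains valid for at least the next 24 hours; a failing check would trigger regeneration rather than reuse. The same check can be done with the Go standard library, as in this sketch; the path in main is illustrative and the helper is not minikube's own code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching: openssl x509 -noout -in <path> -checkend <seconds>.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}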
	I0318 14:22:16.373719 1128583 kubeadm.go:391] StartCluster: {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:22:16.373823 1128583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:22:16.373921 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.417874 1128583 cri.go:89] found id: ""
	I0318 14:22:16.417957 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:22:16.431026 1128583 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:22:16.431057 1128583 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:22:16.431065 1128583 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:22:16.431125 1128583 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:22:16.445445 1128583 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:22:16.446576 1128583 kubeconfig.go:125] found "no-preload-188109" server: "https://192.168.61.40:8443"
	I0318 14:22:16.449104 1128583 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:22:16.461001 1128583 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.40
	I0318 14:22:16.461042 1128583 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:22:16.461056 1128583 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:22:16.461104 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.502356 1128583 cri.go:89] found id: ""
	I0318 14:22:16.502437 1128583 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:22:16.525636 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:22:16.538600 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:22:16.538626 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:22:16.538677 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:22:16.550720 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:22:16.550803 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:22:16.562585 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:22:16.573439 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:22:16.573502 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:22:16.585548 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.596619 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:22:16.596706 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.608458 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:22:16.619498 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:22:16.619587 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:22:16.631359 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:22:16.643420 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:16.765437 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:17.862932 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.097434993s)
	I0318 14:22:17.862980 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.097197 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.168390 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
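Because this is a restart of an existing control plane, minikube does not run a full "kubeadm init"; it replays only the individual phases shown above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. A sketch of that phase loop follows, assuming the pinned kubeadm binary path from the log is usable directly and that the caller already has root; error handling is trimmed to the essentials.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phase names and config path taken from the log above.
	kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s\n", p, err, out)
			return
		}
	}
}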
	I0318 14:22:18.295118 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:22:18.295225 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.795897 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.295431 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.335088 1128583 api_server.go:72] duration metric: took 1.039967082s to wait for apiserver process to appear ...
	I0318 14:22:19.335128 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:22:19.335163 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:19.335912 1128583 api_server.go:269] stopped: https://192.168.61.40:8443/healthz: Get "https://192.168.61.40:8443/healthz": dial tcp 192.168.61.40:8443: connect: connection refused
	I0318 14:22:19.836266 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:16.597253 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.097759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.597764 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.097196 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.597181 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.097798 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.598008 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.097899 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.597717 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:21.097339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.699537 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:21.199910 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:22.338349 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.338383 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.338402 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.351154 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.351190 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.835446 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.841044 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:22.841092 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.335665 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.347092 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.347126 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.835731 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.840517 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.840559 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:24.336151 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:24.340981 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:22:24.354524 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:22:24.354560 1128583 api_server.go:131] duration metric: took 5.019424083s to wait for apiserver health ...
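The healthz sequence above is the expected restart progression: connection refused while the static pod comes up, 403 while requests are still anonymous, 500 while the rbac/bootstrap-roles post-start hook finishes, and finally 200. A polling sketch against the same endpoint is shown below; it skips TLS verification purely to keep the example self-contained (the real client authenticates with the cluster CA and client certs), and the timeout value is illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls <server>/healthz until it returns 200 or the deadline passes.
// Non-200 responses (403, 500) are treated as "not ready yet", matching the log.
func waitHealthz(server string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(server + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.61.40:8443", 2*time.Minute))
}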
	I0318 14:22:24.354570 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:24.354576 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:24.356602 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:22:20.818751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:23.312003 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:24.358089 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:22:24.375159 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:22:24.426409 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:22:24.452289 1128583 system_pods.go:59] 8 kube-system pods found
	I0318 14:22:24.452326 1128583 system_pods.go:61] "coredns-76f75df574-cksb5" [9cd14e15-7b0f-4978-b667-cba1a54db074] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:22:24.452333 1128583 system_pods.go:61] "etcd-no-preload-188109" [fa7d3ae7-2ac1-4275-8739-686c2e3b7569] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:22:24.452345 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [135ee544-ca83-41ab-9cb2-070587eb3b77] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:22:24.452351 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [fd91846b-6210-4cab-ae0f-5e942b4f596e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:22:24.452361 1128583 system_pods.go:61] "kube-proxy-k5kcr" [a1649d3a-9063-49c3-a8a5-04879eee108b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:22:24.452367 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [5bbb4165-ca8f-4807-ad01-bb35c56b6aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:22:24.452375 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-6pn6n" [004af8d8-fa8c-475c-9604-ed98ccceb3a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:22:24.452390 1128583 system_pods.go:61] "storage-provisioner" [45cae6ca-e3ad-4f7e-9d10-96e091160e4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:22:24.452404 1128583 system_pods.go:74] duration metric: took 25.960889ms to wait for pod list to return data ...
	I0318 14:22:24.452417 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:22:24.456337 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:22:24.456367 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:22:24.456404 1128583 node_conditions.go:105] duration metric: took 3.980296ms to run NodePressure ...
	I0318 14:22:24.456424 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:24.738808 1128583 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743864 1128583 kubeadm.go:733] kubelet initialised
	I0318 14:22:24.743893 1128583 kubeadm.go:734] duration metric: took 5.054661ms waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743905 1128583 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:22:24.749832 1128583 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:21.597443 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.097053 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.597084 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.097025 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.597649 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.097040 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.597607 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.097886 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.597114 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:26.097643 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.700193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.198261 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:25.810553 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:27.811576 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.310813 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.757033 1128583 pod_ready.go:102] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:28.757522 1128583 pod_ready.go:92] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:28.757562 1128583 pod_ready.go:81] duration metric: took 4.007696709s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:28.757576 1128583 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:30.767877 1128583 pod_ready.go:102] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.597493 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.097772 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.597033 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.097997 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.597751 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.097139 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.596987 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.097453 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.598006 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:31.097066 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.199688 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.199994 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:32.311356 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.311807 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.265717 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:31.265745 1128583 pod_ready.go:81] duration metric: took 2.508162139s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:31.265755 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:33.273718 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:35.275477 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.597688 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.097887 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.597759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.097858 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.597065 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.097024 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.597018 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.097472 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.597226 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.097920 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.200137 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.698589 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:36.812617 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.312289 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:37.774164 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.273935 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.273990 1128583 pod_ready.go:81] duration metric: took 8.008204942s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.274005 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280284 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.280313 1128583 pod_ready.go:81] duration metric: took 6.300519ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280324 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286027 1128583 pod_ready.go:92] pod "kube-proxy-k5kcr" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.286052 1128583 pod_ready.go:81] duration metric: took 5.721757ms for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286061 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292404 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.292450 1128583 pod_ready.go:81] duration metric: took 6.381121ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292462 1128583 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:36.597756 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.097176 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.597091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.097280 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.597026 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.097810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.597789 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.097897 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.597313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:41.096966 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.699760 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.198691 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.199259 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.812494 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:44.312890 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.300167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:43.803022 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.597849 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.097957 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.597473 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.097624 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.597810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.098012 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.597317 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.097384 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.597816 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:46.097353 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.199771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:45.698884 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.811124 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.827580 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.300768 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.300891 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.800442 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.597824 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:47.097559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:47.097660 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:47.142970 1129259 cri.go:89] found id: ""
	I0318 14:22:47.143027 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.143040 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:47.143047 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:47.143196 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:47.183530 1129259 cri.go:89] found id: ""
	I0318 14:22:47.183564 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.183573 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:47.183578 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:47.183654 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:47.226284 1129259 cri.go:89] found id: ""
	I0318 14:22:47.226317 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.226351 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:47.226359 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:47.226433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:47.272642 1129259 cri.go:89] found id: ""
	I0318 14:22:47.272684 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.272708 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:47.272725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:47.272791 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:47.318501 1129259 cri.go:89] found id: ""
	I0318 14:22:47.318547 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.318562 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:47.318571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:47.318652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:47.357743 1129259 cri.go:89] found id: ""
	I0318 14:22:47.357786 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.357801 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:47.357810 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:47.357894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:47.398516 1129259 cri.go:89] found id: ""
	I0318 14:22:47.398550 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.398563 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:47.398571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:47.398649 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:47.443375 1129259 cri.go:89] found id: ""
	I0318 14:22:47.443413 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.443426 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:47.443439 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:47.443456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:47.512719 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:47.512773 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:47.560380 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:47.560421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:47.616159 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:47.616221 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:47.631903 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:47.631945 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:47.766159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:50.267365 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:50.287102 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:50.287169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:50.326581 1129259 cri.go:89] found id: ""
	I0318 14:22:50.326618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.326630 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:50.326638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:50.326719 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:50.366526 1129259 cri.go:89] found id: ""
	I0318 14:22:50.366563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.366577 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:50.366585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:50.366656 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:50.407884 1129259 cri.go:89] found id: ""
	I0318 14:22:50.407920 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.407932 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:50.407939 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:50.408011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:50.446932 1129259 cri.go:89] found id: ""
	I0318 14:22:50.446971 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.446982 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:50.446990 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:50.447047 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:50.490489 1129259 cri.go:89] found id: ""
	I0318 14:22:50.490529 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.490542 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:50.490552 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:50.490632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:50.531796 1129259 cri.go:89] found id: ""
	I0318 14:22:50.531876 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.531896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:50.531911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:50.532000 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:50.579429 1129259 cri.go:89] found id: ""
	I0318 14:22:50.579464 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.579473 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:50.579480 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:50.579555 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:50.617981 1129259 cri.go:89] found id: ""
	I0318 14:22:50.618053 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.618070 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:50.618086 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:50.618107 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:50.690265 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:50.690316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:50.738713 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:50.738750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:50.793127 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:50.793176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:50.809608 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:50.809645 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:50.893389 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:47.699312 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.199049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:51.312163 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.812711 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:52.800573 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:54.801034 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.394103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:53.410405 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:53.410485 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:53.451524 1129259 cri.go:89] found id: ""
	I0318 14:22:53.451563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.451577 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:53.451585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:53.451650 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:53.492923 1129259 cri.go:89] found id: ""
	I0318 14:22:53.492958 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.492972 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:53.492980 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:53.493053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:53.535699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.535738 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.535751 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:53.535757 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:53.535846 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:53.575766 1129259 cri.go:89] found id: ""
	I0318 14:22:53.575807 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.575818 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:53.575843 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:53.575922 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:53.613442 1129259 cri.go:89] found id: ""
	I0318 14:22:53.613473 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.613495 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:53.613502 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:53.613567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:53.655108 1129259 cri.go:89] found id: ""
	I0318 14:22:53.655141 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.655152 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:53.655160 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:53.655233 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:53.693839 1129259 cri.go:89] found id: ""
	I0318 14:22:53.693879 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.693891 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:53.693898 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:53.693971 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:53.736699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.736729 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.736737 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:53.736747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:53.736759 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:53.790612 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:53.790670 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:53.806185 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:53.806226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:53.893535 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:53.893575 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:53.893593 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:53.966434 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:53.966482 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:52.698863 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:55.200175 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.311249 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:58.312362 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:57.300207 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.300788 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.513599 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:56.529572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:56.529652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:56.569850 1129259 cri.go:89] found id: ""
	I0318 14:22:56.569890 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.569905 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:56.569923 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:56.570001 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:56.607508 1129259 cri.go:89] found id: ""
	I0318 14:22:56.607542 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.607554 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:56.607562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:56.607625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:56.644693 1129259 cri.go:89] found id: ""
	I0318 14:22:56.644731 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.644742 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:56.644751 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:56.644825 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:56.686265 1129259 cri.go:89] found id: ""
	I0318 14:22:56.686304 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.686316 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:56.686323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:56.686377 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:56.732519 1129259 cri.go:89] found id: ""
	I0318 14:22:56.732552 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.732559 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:56.732565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:56.732639 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:56.770015 1129259 cri.go:89] found id: ""
	I0318 14:22:56.770049 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.770059 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:56.770067 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:56.770120 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:56.813964 1129259 cri.go:89] found id: ""
	I0318 14:22:56.813993 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.814004 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:56.814012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:56.814108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:56.853650 1129259 cri.go:89] found id: ""
	I0318 14:22:56.853695 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.853705 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:56.853718 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:56.853735 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:56.911922 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:56.911971 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:56.935385 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:56.935415 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:57.040668 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:57.040696 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:57.040710 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:57.123258 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:57.123314 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:59.674542 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:59.688636 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:59.688721 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:59.731479 1129259 cri.go:89] found id: ""
	I0318 14:22:59.731508 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.731517 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:59.731523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:59.731599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:59.778127 1129259 cri.go:89] found id: ""
	I0318 14:22:59.778157 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.778169 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:59.778176 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:59.778245 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:59.820812 1129259 cri.go:89] found id: ""
	I0318 14:22:59.820840 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.820850 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:59.820856 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:59.820930 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:59.866491 1129259 cri.go:89] found id: ""
	I0318 14:22:59.866526 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.866539 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:59.866548 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:59.866614 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:59.907135 1129259 cri.go:89] found id: ""
	I0318 14:22:59.907173 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.907185 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:59.907194 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:59.907266 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:59.948578 1129259 cri.go:89] found id: ""
	I0318 14:22:59.948618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.948627 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:59.948633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:59.948698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:59.986724 1129259 cri.go:89] found id: ""
	I0318 14:22:59.986749 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.986758 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:59.986765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:59.986834 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:00.031190 1129259 cri.go:89] found id: ""
	I0318 14:23:00.031223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:00.031233 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:00.031244 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:00.031260 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:00.087925 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:00.087970 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:00.104778 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:00.104810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:00.190730 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:00.190759 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:00.190775 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:00.282713 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:00.282763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:57.698375 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.706517 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:00.814865 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:03.312810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:01.800156 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.302577 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:02.834125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:02.852098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:02.852184 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:02.902683 1129259 cri.go:89] found id: ""
	I0318 14:23:02.902714 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.902726 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:02.902734 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:02.902844 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:02.963685 1129259 cri.go:89] found id: ""
	I0318 14:23:02.963718 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.963742 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:02.963750 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:02.963822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:03.021566 1129259 cri.go:89] found id: ""
	I0318 14:23:03.021600 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.021611 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:03.021618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:03.021689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:03.062577 1129259 cri.go:89] found id: ""
	I0318 14:23:03.062607 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.062616 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:03.062622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:03.062681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:03.101524 1129259 cri.go:89] found id: ""
	I0318 14:23:03.101554 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.101565 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:03.101573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:03.101645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:03.146253 1129259 cri.go:89] found id: ""
	I0318 14:23:03.146282 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.146294 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:03.146309 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:03.146380 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:03.189196 1129259 cri.go:89] found id: ""
	I0318 14:23:03.189230 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.189241 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:03.189250 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:03.189335 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:03.231627 1129259 cri.go:89] found id: ""
	I0318 14:23:03.231663 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.231676 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:03.231688 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:03.231719 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:03.248100 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:03.248144 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:03.325484 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:03.325509 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:03.325522 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:03.406877 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:03.406925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:03.457449 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:03.457487 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.011169 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:06.026962 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:06.027033 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:06.068556 1129259 cri.go:89] found id: ""
	I0318 14:23:06.068595 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.068606 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:06.068615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:06.068695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:06.110627 1129259 cri.go:89] found id: ""
	I0318 14:23:06.110667 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.110679 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:06.110687 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:06.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:02.198461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.199002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.199307 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:05.811934 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:08.312176 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:10.312721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.800938 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:09.302833 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.151933 1129259 cri.go:89] found id: ""
	I0318 14:23:06.152604 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.152620 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:06.152629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:06.152697 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:06.195300 1129259 cri.go:89] found id: ""
	I0318 14:23:06.195338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.195347 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:06.195353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:06.195417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:06.235155 1129259 cri.go:89] found id: ""
	I0318 14:23:06.235207 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.235220 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:06.235229 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:06.235289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:06.282729 1129259 cri.go:89] found id: ""
	I0318 14:23:06.282772 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.282785 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:06.282793 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:06.282869 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:06.323908 1129259 cri.go:89] found id: ""
	I0318 14:23:06.323940 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.323949 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:06.323955 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:06.324011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:06.365846 1129259 cri.go:89] found id: ""
	I0318 14:23:06.365888 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.365902 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:06.365915 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:06.365934 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:06.413646 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:06.413696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.465648 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:06.465688 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:06.480926 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:06.480958 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:06.554929 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:06.554966 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:06.554985 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.139322 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:09.155700 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:09.155768 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:09.200557 1129259 cri.go:89] found id: ""
	I0318 14:23:09.200585 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.200593 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:09.200599 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:09.200653 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:09.239535 1129259 cri.go:89] found id: ""
	I0318 14:23:09.239573 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.239596 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:09.239613 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:09.239698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:09.279206 1129259 cri.go:89] found id: ""
	I0318 14:23:09.279240 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.279249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:09.279256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:09.279313 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:09.323928 1129259 cri.go:89] found id: ""
	I0318 14:23:09.323964 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.323977 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:09.323986 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:09.324062 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:09.365760 1129259 cri.go:89] found id: ""
	I0318 14:23:09.365796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.365807 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:09.365814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:09.365887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:09.411362 1129259 cri.go:89] found id: ""
	I0318 14:23:09.411394 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.411405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:09.411415 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:09.411508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:09.452793 1129259 cri.go:89] found id: ""
	I0318 14:23:09.452822 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.452873 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:09.452880 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:09.452939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:09.494230 1129259 cri.go:89] found id: ""
	I0318 14:23:09.494259 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.494269 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:09.494279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:09.494292 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:09.546804 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:09.546848 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:09.562509 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:09.562545 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:09.637701 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:09.637723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:09.637738 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.721916 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:09.721962 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:08.699862 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.199072 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.315288 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.813053 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.800023 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.300632 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.271942 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:12.288424 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:12.288503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:12.329950 1129259 cri.go:89] found id: ""
	I0318 14:23:12.329990 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.330004 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:12.330012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:12.330083 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:12.368748 1129259 cri.go:89] found id: ""
	I0318 14:23:12.368798 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.368812 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:12.368821 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:12.368894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:12.408280 1129259 cri.go:89] found id: ""
	I0318 14:23:12.408313 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.408323 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:12.408329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:12.408385 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:12.449537 1129259 cri.go:89] found id: ""
	I0318 14:23:12.449583 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.449593 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:12.449605 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:12.449661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:12.488394 1129259 cri.go:89] found id: ""
	I0318 14:23:12.488427 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.488441 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:12.488449 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:12.488528 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:12.527613 1129259 cri.go:89] found id: ""
	I0318 14:23:12.527649 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.527658 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:12.527664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:12.527716 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:12.568953 1129259 cri.go:89] found id: ""
	I0318 14:23:12.568983 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.568991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:12.568997 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:12.569051 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:12.609622 1129259 cri.go:89] found id: ""
	I0318 14:23:12.609661 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.609672 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:12.609683 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:12.609696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:12.663119 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:12.663176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:12.679466 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:12.679508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:12.763085 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:12.763110 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:12.763125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:12.848677 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:12.848721 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.393108 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:15.406670 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:15.406821 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:15.445518 1129259 cri.go:89] found id: ""
	I0318 14:23:15.445556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.445567 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:15.445574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:15.445632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:15.488009 1129259 cri.go:89] found id: ""
	I0318 14:23:15.488040 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.488052 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:15.488089 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:15.488160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:15.526067 1129259 cri.go:89] found id: ""
	I0318 14:23:15.526099 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.526108 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:15.526115 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:15.526185 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:15.567573 1129259 cri.go:89] found id: ""
	I0318 14:23:15.567608 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.567622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:15.567630 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:15.567701 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:15.606585 1129259 cri.go:89] found id: ""
	I0318 14:23:15.606615 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.606626 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:15.606642 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:15.606700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:15.645265 1129259 cri.go:89] found id: ""
	I0318 14:23:15.645296 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.645305 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:15.645312 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:15.645368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:15.685299 1129259 cri.go:89] found id: ""
	I0318 14:23:15.685332 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.685342 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:15.685348 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:15.685421 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:15.725781 1129259 cri.go:89] found id: ""
	I0318 14:23:15.725818 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.725832 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:15.725848 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:15.725867 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.769528 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:15.769568 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:15.825418 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:15.825461 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:15.842139 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:15.842173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:15.922354 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:15.922419 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:15.922438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:13.199539 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:15.700968 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:17.311266 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:19.311540 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:16.800323 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.801497 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.503475 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:18.518462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:18.518561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:18.559354 1129259 cri.go:89] found id: ""
	I0318 14:23:18.559392 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.559404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:18.559412 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:18.559484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:18.604455 1129259 cri.go:89] found id: ""
	I0318 14:23:18.604488 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.604500 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:18.604507 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:18.604592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:18.646032 1129259 cri.go:89] found id: ""
	I0318 14:23:18.646098 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.646110 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:18.646119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:18.646188 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:18.684752 1129259 cri.go:89] found id: ""
	I0318 14:23:18.684791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.684802 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:18.684808 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:18.684863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:18.728256 1129259 cri.go:89] found id: ""
	I0318 14:23:18.728299 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.728321 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:18.728330 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:18.728409 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:18.771335 1129259 cri.go:89] found id: ""
	I0318 14:23:18.771382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.771392 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:18.771398 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:18.771467 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:18.812273 1129259 cri.go:89] found id: ""
	I0318 14:23:18.812305 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.812318 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:18.812331 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:18.812399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:18.854901 1129259 cri.go:89] found id: ""
	I0318 14:23:18.854942 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.854957 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:18.854971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:18.854990 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:18.939982 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:18.940031 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:18.985433 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:18.985465 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:19.041353 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:19.041405 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:19.057764 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:19.057810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:19.131974 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:18.198887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:20.698596 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.312215 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.810513 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.299039 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.300143 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.798699 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.632395 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:21.646344 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:21.646434 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:21.687475 1129259 cri.go:89] found id: ""
	I0318 14:23:21.687526 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.687542 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:21.687553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:21.687636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:21.728684 1129259 cri.go:89] found id: ""
	I0318 14:23:21.728722 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.728734 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:21.728742 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:21.728816 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:21.772395 1129259 cri.go:89] found id: ""
	I0318 14:23:21.772436 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.772449 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:21.772457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:21.772529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:21.812758 1129259 cri.go:89] found id: ""
	I0318 14:23:21.812793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.812804 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:21.812813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:21.812878 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:21.854334 1129259 cri.go:89] found id: ""
	I0318 14:23:21.854376 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.854387 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:21.854395 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:21.854468 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:21.894237 1129259 cri.go:89] found id: ""
	I0318 14:23:21.894270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.894278 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:21.894285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:21.894339 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:21.931671 1129259 cri.go:89] found id: ""
	I0318 14:23:21.931709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.931720 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:21.931729 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:21.931795 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:21.971060 1129259 cri.go:89] found id: ""
	I0318 14:23:21.971091 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.971100 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:21.971111 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:21.971125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:22.055070 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:22.055126 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.101854 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:22.101888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:22.157502 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:22.157550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:22.175612 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:22.175648 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:22.261607 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:24.761996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:24.777475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:24.777545 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:24.818385 1129259 cri.go:89] found id: ""
	I0318 14:23:24.818421 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.818434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:24.818447 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:24.818508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:24.856232 1129259 cri.go:89] found id: ""
	I0318 14:23:24.856270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.856282 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:24.856291 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:24.856360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:24.891887 1129259 cri.go:89] found id: ""
	I0318 14:23:24.891924 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.891936 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:24.891945 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:24.892020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:24.937555 1129259 cri.go:89] found id: ""
	I0318 14:23:24.937594 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.937605 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:24.937614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:24.937689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:24.978561 1129259 cri.go:89] found id: ""
	I0318 14:23:24.978598 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.978609 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:24.978620 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:24.978692 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:25.026398 1129259 cri.go:89] found id: ""
	I0318 14:23:25.026453 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.026462 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:25.026475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:25.026529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:25.063346 1129259 cri.go:89] found id: ""
	I0318 14:23:25.063382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.063394 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:25.063403 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:25.063482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:25.106097 1129259 cri.go:89] found id: ""
	I0318 14:23:25.106135 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.106147 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:25.106160 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:25.106177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:25.162362 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:25.162412 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:25.179898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:25.179943 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:25.281856 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:25.281896 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:25.281914 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:25.371561 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:25.371605 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.699705 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.200662 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.811810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.813013 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.311457 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.800554 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.304272 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.915774 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:27.931725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:27.931806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:27.971259 1129259 cri.go:89] found id: ""
	I0318 14:23:27.971297 1129259 logs.go:276] 0 containers: []
	W0318 14:23:27.971322 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:27.971340 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:27.971411 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:28.012704 1129259 cri.go:89] found id: ""
	I0318 14:23:28.012735 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.012747 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:28.012755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:28.012829 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:28.051639 1129259 cri.go:89] found id: ""
	I0318 14:23:28.051669 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.051680 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:28.051686 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:28.051753 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:28.091344 1129259 cri.go:89] found id: ""
	I0318 14:23:28.091377 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.091386 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:28.091392 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:28.091445 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:28.131190 1129259 cri.go:89] found id: ""
	I0318 14:23:28.131224 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.131237 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:28.131246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:28.131324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:28.171717 1129259 cri.go:89] found id: ""
	I0318 14:23:28.171756 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.171769 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:28.171777 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:28.171863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:28.207812 1129259 cri.go:89] found id: ""
	I0318 14:23:28.207862 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.207874 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:28.207886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:28.207942 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:28.252721 1129259 cri.go:89] found id: ""
	I0318 14:23:28.252766 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.252779 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:28.252796 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:28.252812 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:28.311227 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:28.311278 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:28.328390 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:28.328422 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:28.413973 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:28.414005 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:28.414026 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:28.504716 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:28.504764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.049944 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:31.065402 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:31.065490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:31.110647 1129259 cri.go:89] found id: ""
	I0318 14:23:31.110675 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.110683 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:31.110690 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:31.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:27.700002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.200376 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.311860 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.313084 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.802042 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:35.299530 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:31.154046 1129259 cri.go:89] found id: ""
	I0318 14:23:31.154075 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.154084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:31.154091 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:31.154162 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:31.191863 1129259 cri.go:89] found id: ""
	I0318 14:23:31.191894 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.191904 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:31.191911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:31.191979 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:31.234961 1129259 cri.go:89] found id: ""
	I0318 14:23:31.234993 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.235003 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:31.235011 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:31.235082 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:31.290365 1129259 cri.go:89] found id: ""
	I0318 14:23:31.290402 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.290414 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:31.290421 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:31.290516 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:31.331162 1129259 cri.go:89] found id: ""
	I0318 14:23:31.331198 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.331211 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:31.331219 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:31.331283 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:31.370382 1129259 cri.go:89] found id: ""
	I0318 14:23:31.370424 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.370436 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:31.370448 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:31.370520 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:31.409913 1129259 cri.go:89] found id: ""
	I0318 14:23:31.409948 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.409959 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:31.409971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:31.409987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:31.493416 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:31.493456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.546275 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:31.546309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:31.598580 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:31.598639 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:31.615741 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:31.615778 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:31.694159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.194339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:34.209763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:34.209849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:34.248405 1129259 cri.go:89] found id: ""
	I0318 14:23:34.248442 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.248456 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:34.248464 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:34.248538 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:34.290217 1129259 cri.go:89] found id: ""
	I0318 14:23:34.290249 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.290263 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:34.290270 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:34.290338 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:34.337403 1129259 cri.go:89] found id: ""
	I0318 14:23:34.337441 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.337452 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:34.337460 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:34.337533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:34.380042 1129259 cri.go:89] found id: ""
	I0318 14:23:34.380082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.380096 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:34.380105 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:34.380181 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:34.417834 1129259 cri.go:89] found id: ""
	I0318 14:23:34.417866 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.417879 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:34.417888 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:34.417960 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:34.456496 1129259 cri.go:89] found id: ""
	I0318 14:23:34.456538 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.456549 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:34.456559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:34.456629 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:34.497772 1129259 cri.go:89] found id: ""
	I0318 14:23:34.497809 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.497822 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:34.497831 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:34.497887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:34.544757 1129259 cri.go:89] found id: ""
	I0318 14:23:34.544811 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.544825 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:34.544840 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:34.544859 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:34.602192 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:34.602237 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:34.619476 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:34.619515 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:34.695721 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.695761 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:34.695781 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:34.773045 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:34.773090 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:32.212811 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.700061 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:36.811811 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.312768 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.300434 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.300586 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.320468 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:37.335756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:37.335847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:37.379742 1129259 cri.go:89] found id: ""
	I0318 14:23:37.379791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.379804 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:37.379812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:37.379898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:37.421225 1129259 cri.go:89] found id: ""
	I0318 14:23:37.421261 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.421276 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:37.421284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:37.421353 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:37.463393 1129259 cri.go:89] found id: ""
	I0318 14:23:37.463426 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.463435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:37.463441 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:37.463503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:37.505835 1129259 cri.go:89] found id: ""
	I0318 14:23:37.505871 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.505879 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:37.505885 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:37.505951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:37.545983 1129259 cri.go:89] found id: ""
	I0318 14:23:37.546016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.546029 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:37.546037 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:37.546110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:37.585433 1129259 cri.go:89] found id: ""
	I0318 14:23:37.585466 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.585477 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:37.585486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:37.585561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:37.622978 1129259 cri.go:89] found id: ""
	I0318 14:23:37.623016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.623027 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:37.623034 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:37.623110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:37.675689 1129259 cri.go:89] found id: ""
	I0318 14:23:37.675721 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.675732 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:37.675743 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:37.675763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:37.785788 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.785820 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:37.785839 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:37.870218 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:37.870261 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:37.918199 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:37.918236 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:37.975082 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:37.975135 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:40.491216 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:40.507123 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:40.507189 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:40.548763 1129259 cri.go:89] found id: ""
	I0318 14:23:40.548796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.548806 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:40.548812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:40.548865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:40.589821 1129259 cri.go:89] found id: ""
	I0318 14:23:40.589859 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.589872 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:40.589879 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:40.589961 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:40.629571 1129259 cri.go:89] found id: ""
	I0318 14:23:40.629603 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.629615 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:40.629622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:40.629698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:40.668648 1129259 cri.go:89] found id: ""
	I0318 14:23:40.668682 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.668692 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:40.668719 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:40.668789 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:40.712948 1129259 cri.go:89] found id: ""
	I0318 14:23:40.713005 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.713018 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:40.713027 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:40.713103 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:40.763269 1129259 cri.go:89] found id: ""
	I0318 14:23:40.763298 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.763307 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:40.763313 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:40.763366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:40.809737 1129259 cri.go:89] found id: ""
	I0318 14:23:40.809776 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.809789 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:40.809798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:40.809873 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:40.849882 1129259 cri.go:89] found id: ""
	I0318 14:23:40.849921 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.849931 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:40.849941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:40.849961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:40.931042 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:40.931084 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:40.973246 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:40.973280 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:41.028835 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:41.028880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:41.044250 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:41.044293 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:41.116937 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.199672 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.698826 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.810759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.812721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.800736 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.617773 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:43.635147 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:43.635216 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:43.683392 1129259 cri.go:89] found id: ""
	I0318 14:23:43.683430 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.683446 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:43.683455 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:43.683521 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:43.729761 1129259 cri.go:89] found id: ""
	I0318 14:23:43.729801 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.729813 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:43.729820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:43.729888 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:43.790694 1129259 cri.go:89] found id: ""
	I0318 14:23:43.790728 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.790741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:43.790748 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:43.790819 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:43.838506 1129259 cri.go:89] found id: ""
	I0318 14:23:43.838537 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.838548 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:43.838557 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:43.838625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:43.879695 1129259 cri.go:89] found id: ""
	I0318 14:23:43.879725 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.879735 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:43.879743 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:43.879806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:43.919206 1129259 cri.go:89] found id: ""
	I0318 14:23:43.919238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.919250 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:43.919258 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:43.919333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:43.966266 1129259 cri.go:89] found id: ""
	I0318 14:23:43.966308 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.966321 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:43.966329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:43.966399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:44.006272 1129259 cri.go:89] found id: ""
	I0318 14:23:44.006310 1129259 logs.go:276] 0 containers: []
	W0318 14:23:44.006324 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:44.006339 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:44.006358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:44.063345 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:44.063395 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:44.079323 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:44.079365 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:44.158132 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:44.158157 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:44.158177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:44.244657 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:44.244707 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:41.707557 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.199509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.311703 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.811077 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.301804 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.800280 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.801802 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.791776 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:46.807457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:46.807547 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:46.849964 1129259 cri.go:89] found id: ""
	I0318 14:23:46.850003 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.850017 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:46.850025 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:46.850084 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:46.893174 1129259 cri.go:89] found id: ""
	I0318 14:23:46.893214 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.893227 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:46.893235 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:46.893314 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:46.933932 1129259 cri.go:89] found id: ""
	I0318 14:23:46.933969 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.933981 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:46.933998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:46.934075 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:46.973034 1129259 cri.go:89] found id: ""
	I0318 14:23:46.973073 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.973085 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:46.973093 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:46.973165 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:47.013465 1129259 cri.go:89] found id: ""
	I0318 14:23:47.013502 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.013515 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:47.013523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:47.013595 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:47.050526 1129259 cri.go:89] found id: ""
	I0318 14:23:47.050556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.050569 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:47.050583 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:47.050651 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:47.090395 1129259 cri.go:89] found id: ""
	I0318 14:23:47.090435 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.090448 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:47.090456 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:47.090533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:47.132761 1129259 cri.go:89] found id: ""
	I0318 14:23:47.132790 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.132799 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:47.132809 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:47.132822 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:47.179035 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:47.179073 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:47.231641 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:47.231687 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:47.248134 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:47.248171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:47.330265 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:47.330294 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:47.330311 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:49.912288 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:49.927753 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:49.927842 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:49.968306 1129259 cri.go:89] found id: ""
	I0318 14:23:49.968338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:49.968348 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:49.968354 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:49.968424 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:50.009781 1129259 cri.go:89] found id: ""
	I0318 14:23:50.009813 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.009821 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:50.009828 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:50.009892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:50.049203 1129259 cri.go:89] found id: ""
	I0318 14:23:50.049238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.049249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:50.049257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:50.049323 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:50.089679 1129259 cri.go:89] found id: ""
	I0318 14:23:50.089709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.089719 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:50.089725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:50.089790 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:50.132352 1129259 cri.go:89] found id: ""
	I0318 14:23:50.132384 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.132395 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:50.132404 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:50.132474 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:50.169043 1129259 cri.go:89] found id: ""
	I0318 14:23:50.169076 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.169089 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:50.169098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:50.169166 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:50.207753 1129259 cri.go:89] found id: ""
	I0318 14:23:50.207793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.207805 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:50.207813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:50.207898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:50.247048 1129259 cri.go:89] found id: ""
	I0318 14:23:50.247082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.247093 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:50.247103 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:50.247114 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:50.299768 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:50.299816 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:50.317627 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:50.317674 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:50.393122 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:50.393152 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:50.393170 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:50.480828 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:50.480880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:46.698786 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:49.198083 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:51.198509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.812029 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.311681 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.300917 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.301653 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.030467 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.044538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:53.044615 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:53.082312 1129259 cri.go:89] found id: ""
	I0318 14:23:53.082351 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.082361 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:53.082370 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:53.082431 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:53.127597 1129259 cri.go:89] found id: ""
	I0318 14:23:53.127631 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.127640 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:53.127645 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:53.127708 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:53.172152 1129259 cri.go:89] found id: ""
	I0318 14:23:53.172189 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.172203 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:53.172212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:53.172295 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:53.210210 1129259 cri.go:89] found id: ""
	I0318 14:23:53.210268 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.210281 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:53.210289 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:53.210356 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:53.248963 1129259 cri.go:89] found id: ""
	I0318 14:23:53.248995 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.249004 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:53.249010 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:53.249065 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:53.287853 1129259 cri.go:89] found id: ""
	I0318 14:23:53.287886 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.287896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:53.287903 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:53.287956 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:53.326858 1129259 cri.go:89] found id: ""
	I0318 14:23:53.326895 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.326908 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:53.326917 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:53.326987 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:53.369347 1129259 cri.go:89] found id: ""
	I0318 14:23:53.369381 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.369394 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:53.369407 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:53.369424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:53.420342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:53.420387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:53.436718 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:53.436750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:53.517954 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:53.518018 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:53.518036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:53.597726 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:53.597782 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:56.144313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.699341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.699481 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.810495 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.810917 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:59.812265 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.800712 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.300089 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:56.159569 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:56.159663 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:56.198525 1129259 cri.go:89] found id: ""
	I0318 14:23:56.198563 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.198575 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:56.198584 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:56.198662 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:56.242877 1129259 cri.go:89] found id: ""
	I0318 14:23:56.242913 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.242927 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:56.242942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:56.243018 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:56.282499 1129259 cri.go:89] found id: ""
	I0318 14:23:56.282531 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.282541 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:56.282547 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:56.282618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:56.321765 1129259 cri.go:89] found id: ""
	I0318 14:23:56.321810 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.321825 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:56.321833 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:56.321904 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:56.364005 1129259 cri.go:89] found id: ""
	I0318 14:23:56.364042 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.364054 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:56.364064 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:56.364138 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:56.402312 1129259 cri.go:89] found id: ""
	I0318 14:23:56.402339 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.402350 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:56.402356 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:56.402419 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:56.445638 1129259 cri.go:89] found id: ""
	I0318 14:23:56.445674 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.445686 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:56.445694 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:56.445760 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:56.488833 1129259 cri.go:89] found id: ""
	I0318 14:23:56.488870 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.488883 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:56.488896 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:56.488915 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:56.540862 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:56.540907 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:56.557124 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:56.557171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:56.634679 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:56.634711 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:56.634727 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:56.716419 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:56.716464 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.263125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:59.277619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:59.277703 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:59.318616 1129259 cri.go:89] found id: ""
	I0318 14:23:59.318648 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.318661 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:59.318668 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:59.318740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:59.358540 1129259 cri.go:89] found id: ""
	I0318 14:23:59.358577 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.358589 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:59.358597 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:59.358670 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:59.399046 1129259 cri.go:89] found id: ""
	I0318 14:23:59.399082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.399093 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:59.399099 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:59.399169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:59.439165 1129259 cri.go:89] found id: ""
	I0318 14:23:59.439223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.439236 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:59.439245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:59.439312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:59.476719 1129259 cri.go:89] found id: ""
	I0318 14:23:59.476755 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.476767 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:59.476775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:59.476833 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:59.515847 1129259 cri.go:89] found id: ""
	I0318 14:23:59.515878 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.515888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:59.515895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:59.515966 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:59.560831 1129259 cri.go:89] found id: ""
	I0318 14:23:59.560861 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.560871 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:59.560877 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:59.560939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:59.601176 1129259 cri.go:89] found id: ""
	I0318 14:23:59.601209 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.601219 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:59.601237 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:59.601253 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:59.616829 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:59.616862 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:59.695270 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:59.695300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:59.695316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:59.773564 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:59.773610 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.819326 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:59.819364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:58.198656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.699394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.311601 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.311669 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.300584 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.300628 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.372331 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:02.388245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:02.388333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:02.425594 1129259 cri.go:89] found id: ""
	I0318 14:24:02.425639 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.425655 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:02.425664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:02.425740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:02.467755 1129259 cri.go:89] found id: ""
	I0318 14:24:02.467786 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.467794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:02.467800 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:02.467890 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:02.510004 1129259 cri.go:89] found id: ""
	I0318 14:24:02.510035 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.510045 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:02.510051 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:02.510104 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:02.555590 1129259 cri.go:89] found id: ""
	I0318 14:24:02.555623 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.555632 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:02.555638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:02.555693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:02.595096 1129259 cri.go:89] found id: ""
	I0318 14:24:02.595125 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.595135 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:02.595141 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:02.595214 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:02.639452 1129259 cri.go:89] found id: ""
	I0318 14:24:02.639482 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.639491 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:02.639498 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:02.639563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:02.677653 1129259 cri.go:89] found id: ""
	I0318 14:24:02.677684 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.677700 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:02.677706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:02.677765 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:02.714853 1129259 cri.go:89] found id: ""
	I0318 14:24:02.714885 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.714898 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:02.714909 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:02.714923 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:02.767697 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:02.767742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:02.782786 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:02.782844 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:02.868981 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:02.869020 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:02.869037 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:02.944382 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:02.944421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.491779 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:05.507129 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:05.507213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:05.548809 1129259 cri.go:89] found id: ""
	I0318 14:24:05.548845 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.548858 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:05.548866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:05.548941 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:05.588005 1129259 cri.go:89] found id: ""
	I0318 14:24:05.588040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.588050 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:05.588056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:05.588108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:05.627670 1129259 cri.go:89] found id: ""
	I0318 14:24:05.627707 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.627720 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:05.627728 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:05.627814 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:05.666900 1129259 cri.go:89] found id: ""
	I0318 14:24:05.666936 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.666948 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:05.666957 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:05.667029 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:05.705796 1129259 cri.go:89] found id: ""
	I0318 14:24:05.705831 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.705844 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:05.705852 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:05.705923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:05.749842 1129259 cri.go:89] found id: ""
	I0318 14:24:05.749875 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.749888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:05.749896 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:05.749981 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:05.790843 1129259 cri.go:89] found id: ""
	I0318 14:24:05.790881 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.790896 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:05.790905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:05.790992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:05.832347 1129259 cri.go:89] found id: ""
	I0318 14:24:05.832383 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.832395 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:05.832408 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:05.832424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.874185 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:05.874219 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:05.929482 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:05.929534 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:05.945151 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:05.945187 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:06.024617 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:06.024644 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:06.024663 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:03.198564 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:05.198935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.811819 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.812462 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.300681 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.300912 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.799297 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.607030 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:08.622039 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:08.622140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:08.661599 1129259 cri.go:89] found id: ""
	I0318 14:24:08.661638 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.661647 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:08.661654 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:08.661728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:08.699890 1129259 cri.go:89] found id: ""
	I0318 14:24:08.699920 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.699931 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:08.699940 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:08.700009 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:08.745504 1129259 cri.go:89] found id: ""
	I0318 14:24:08.745541 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.745554 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:08.745562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:08.745624 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:08.784162 1129259 cri.go:89] found id: ""
	I0318 14:24:08.784204 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.784217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:08.784226 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:08.784302 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:08.824197 1129259 cri.go:89] found id: ""
	I0318 14:24:08.824227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.824236 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:08.824242 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:08.824301 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:08.865096 1129259 cri.go:89] found id: ""
	I0318 14:24:08.865128 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.865137 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:08.865146 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:08.865207 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:08.905337 1129259 cri.go:89] found id: ""
	I0318 14:24:08.905371 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.905385 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:08.905393 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:08.905477 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:08.945837 1129259 cri.go:89] found id: ""
	I0318 14:24:08.945880 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.945894 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:08.945906 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:08.945925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:09.023425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:09.023454 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:09.023473 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:09.107945 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:09.107989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:09.149742 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:09.149804 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:09.202813 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:09.202856 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:07.699433 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.198062 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.311072 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:13.311533 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:15.313064 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:12.799619 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.800637 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.720686 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:11.735125 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:11.735218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:11.772164 1129259 cri.go:89] found id: ""
	I0318 14:24:11.772198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.772210 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:11.772218 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:11.772285 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:11.811279 1129259 cri.go:89] found id: ""
	I0318 14:24:11.811309 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.811326 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:11.811334 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:11.811402 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:11.855011 1129259 cri.go:89] found id: ""
	I0318 14:24:11.855052 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.855065 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:11.855073 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:11.855146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:11.893168 1129259 cri.go:89] found id: ""
	I0318 14:24:11.893198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.893206 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:11.893212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:11.893273 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:11.930545 1129259 cri.go:89] found id: ""
	I0318 14:24:11.930583 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.930598 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:11.930608 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:11.930680 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:11.974014 1129259 cri.go:89] found id: ""
	I0318 14:24:11.974040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.974049 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:11.974063 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:11.974147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:12.025218 1129259 cri.go:89] found id: ""
	I0318 14:24:12.025247 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.025257 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:12.025263 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:12.025340 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:12.068361 1129259 cri.go:89] found id: ""
	I0318 14:24:12.068393 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.068406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:12.068425 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:12.068444 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:12.122840 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:12.122892 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:12.138841 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:12.138877 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:12.219567 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:12.219588 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:12.219602 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:12.307322 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:12.307368 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:14.855576 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:14.870076 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:14.870160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:14.910346 1129259 cri.go:89] found id: ""
	I0318 14:24:14.910387 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.910399 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:14.910407 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:14.910479 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:14.957120 1129259 cri.go:89] found id: ""
	I0318 14:24:14.957151 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.957165 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:14.957170 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:14.957238 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:14.998329 1129259 cri.go:89] found id: ""
	I0318 14:24:14.998360 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.998372 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:14.998381 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:14.998450 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:15.036994 1129259 cri.go:89] found id: ""
	I0318 14:24:15.037025 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.037034 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:15.037040 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:15.037095 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:15.075241 1129259 cri.go:89] found id: ""
	I0318 14:24:15.075272 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.075282 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:15.075288 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:15.075368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:15.114149 1129259 cri.go:89] found id: ""
	I0318 14:24:15.114199 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.114208 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:15.114215 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:15.114296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:15.155710 1129259 cri.go:89] found id: ""
	I0318 14:24:15.155745 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.155755 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:15.155762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:15.155847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:15.196863 1129259 cri.go:89] found id: ""
	I0318 14:24:15.196899 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.196910 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:15.196928 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:15.196946 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:15.253103 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:15.253147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:15.268783 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:15.268829 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:15.352694 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:15.352723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:15.352743 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:15.435023 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:15.435068 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
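	The pass above is the same diagnostic loop repeated throughout this test: pgrep for a running kube-apiserver, a per-component crictl listing, then log collection. For reference, it reduces to roughly the commands below, run over minikube ssh on the affected node; the component names and the v1.20.0 kubectl path come straight from the log, while the loop itself is only an illustrative sketch.

		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
		  sudo crictl ps -a --quiet --name="$c"    # empty output == "No container was found matching ..."
		done
		sudo journalctl -u kubelet -n 400          # kubelet logs
		sudo journalctl -u crio -n 400             # CRI-O logs
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig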
	I0318 14:24:12.201234 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.698988 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.811663 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.812068 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:16.801294 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.301959 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.978170 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.994862 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:17.994929 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:18.036067 1129259 cri.go:89] found id: ""
	I0318 14:24:18.036103 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.036112 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:18.036119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:18.036186 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:18.081249 1129259 cri.go:89] found id: ""
	I0318 14:24:18.081280 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.081291 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:18.081297 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:18.081352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:18.122336 1129259 cri.go:89] found id: ""
	I0318 14:24:18.122367 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.122376 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:18.122382 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:18.122441 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:18.163897 1129259 cri.go:89] found id: ""
	I0318 14:24:18.163931 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.163940 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:18.163949 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:18.164012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:18.206744 1129259 cri.go:89] found id: ""
	I0318 14:24:18.206781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.206792 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:18.206798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:18.206881 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:18.245738 1129259 cri.go:89] found id: ""
	I0318 14:24:18.245767 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.245778 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:18.245786 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:18.245851 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:18.285181 1129259 cri.go:89] found id: ""
	I0318 14:24:18.285211 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.285221 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:18.285228 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:18.285282 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:18.328130 1129259 cri.go:89] found id: ""
	I0318 14:24:18.328162 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.328174 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:18.328193 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:18.328210 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:18.410346 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:18.410387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:18.467118 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:18.467154 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:18.530635 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:18.530704 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:18.549898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:18.549952 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:18.646134 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
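	Every describe-nodes attempt in this window fails the same way: localhost:8443 refuses the connection, which is consistent with crictl finding no kube-apiserver container at all. A quick manual check from inside the node would be to confirm nothing is listening on the port and whether an apiserver static-pod manifest exists; the manifest path below is the standard kubeadm location and is an assumption, not something shown in this log.

		sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
		sudo crictl ps -a --name kube-apiserver    # same check the test keeps running
		ls /etc/kubernetes/manifests/              # assumed kubeadm static-pod directory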
	I0318 14:24:21.146368 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.199048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.200040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:22.312401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.812678 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.799684 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.301211 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.162077 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:21.162156 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:21.200211 1129259 cri.go:89] found id: ""
	I0318 14:24:21.200242 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.200251 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:21.200257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:21.200329 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:21.241228 1129259 cri.go:89] found id: ""
	I0318 14:24:21.241265 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.241277 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:21.241284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:21.241359 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:21.278110 1129259 cri.go:89] found id: ""
	I0318 14:24:21.278147 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.278159 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:21.278167 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:21.278240 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:21.317067 1129259 cri.go:89] found id: ""
	I0318 14:24:21.317104 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.317115 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:21.317124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:21.317201 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:21.356217 1129259 cri.go:89] found id: ""
	I0318 14:24:21.356251 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.356260 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:21.356267 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:21.356326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:21.394990 1129259 cri.go:89] found id: ""
	I0318 14:24:21.395031 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.395047 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:21.395056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:21.395136 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:21.435880 1129259 cri.go:89] found id: ""
	I0318 14:24:21.435913 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.435928 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:21.435937 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:21.436023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:21.477754 1129259 cri.go:89] found id: ""
	I0318 14:24:21.477801 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.477814 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:21.477826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:21.477851 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:21.493178 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:21.493220 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:21.570200 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.570239 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:21.570257 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:21.658100 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:21.658147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.703286 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:21.703327 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.266730 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:24.285544 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:24.285655 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:24.338183 1129259 cri.go:89] found id: ""
	I0318 14:24:24.338234 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.338248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:24.338256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:24.338326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:24.407496 1129259 cri.go:89] found id: ""
	I0318 14:24:24.407529 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.407543 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:24.407551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:24.407618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:24.457689 1129259 cri.go:89] found id: ""
	I0318 14:24:24.457728 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.457741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:24.457749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:24.457831 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:24.498685 1129259 cri.go:89] found id: ""
	I0318 14:24:24.498709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.498718 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:24.498725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:24.498783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:24.537966 1129259 cri.go:89] found id: ""
	I0318 14:24:24.537999 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.538009 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:24.538016 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:24.538070 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:24.576493 1129259 cri.go:89] found id: ""
	I0318 14:24:24.576522 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.576532 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:24.576538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:24.576592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:24.613764 1129259 cri.go:89] found id: ""
	I0318 14:24:24.613799 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.613812 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:24.613820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:24.613893 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:24.655862 1129259 cri.go:89] found id: ""
	I0318 14:24:24.655892 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.655906 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:24.655919 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:24.655937 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.710557 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:24.710604 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:24.725755 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:24.725792 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:24.805585 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:24.805616 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:24.805633 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:24.889922 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:24.889989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.699674 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.199382 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.312672 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.315087 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:26.800594 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.299763 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.437998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:27.454560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:27.454664 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:27.493973 1129259 cri.go:89] found id: ""
	I0318 14:24:27.494003 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.494011 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:27.494019 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:27.494078 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:27.543071 1129259 cri.go:89] found id: ""
	I0318 14:24:27.543109 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.543122 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:27.543131 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:27.543211 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:27.586163 1129259 cri.go:89] found id: ""
	I0318 14:24:27.586196 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.586212 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:27.586220 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:27.586324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:27.625233 1129259 cri.go:89] found id: ""
	I0318 14:24:27.625271 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.625284 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:27.625293 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:27.625365 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:27.663729 1129259 cri.go:89] found id: ""
	I0318 14:24:27.663772 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.663782 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:27.663798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:27.663887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:27.702041 1129259 cri.go:89] found id: ""
	I0318 14:24:27.702072 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.702082 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:27.702090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:27.702158 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:27.745186 1129259 cri.go:89] found id: ""
	I0318 14:24:27.745216 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.745226 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:27.745233 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:27.745296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:27.786673 1129259 cri.go:89] found id: ""
	I0318 14:24:27.786709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.786719 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:27.786729 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:27.786742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:27.842472 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:27.842531 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:27.856985 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:27.857016 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:27.935445 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:27.935478 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:27.935496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:28.024737 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:28.024795 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:30.571003 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:30.585617 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:30.585714 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:30.628461 1129259 cri.go:89] found id: ""
	I0318 14:24:30.628488 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.628497 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:30.628503 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:30.628566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:30.674555 1129259 cri.go:89] found id: ""
	I0318 14:24:30.674595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.674610 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:30.674618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:30.674695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:30.714899 1129259 cri.go:89] found id: ""
	I0318 14:24:30.714950 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.714961 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:30.714970 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:30.715039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:30.756263 1129259 cri.go:89] found id: ""
	I0318 14:24:30.756295 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.756305 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:30.756311 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:30.756366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:30.795213 1129259 cri.go:89] found id: ""
	I0318 14:24:30.795244 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.795258 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:30.795265 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:30.795336 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:30.837198 1129259 cri.go:89] found id: ""
	I0318 14:24:30.837233 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.837242 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:30.837248 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:30.837306 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:30.875367 1129259 cri.go:89] found id: ""
	I0318 14:24:30.875404 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.875417 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:30.875427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:30.875510 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:30.918664 1129259 cri.go:89] found id: ""
	I0318 14:24:30.918701 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.918713 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:30.918727 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:30.918747 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:31.004325 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:31.004350 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:31.004367 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:31.093837 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:31.093882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:31.138285 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:31.138318 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:26.698769 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:28.700212 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.200571 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.811482 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.812980 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.299818 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.300656 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.798808 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
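	The pod_ready lines interleaved here come from three other clusters (processes 1128583, 1128788, 1128964) polling their metrics-server pods, which never reach Ready during this window. To investigate by hand one would typically describe the pod and read its recent events; the kube-system namespace is taken from the log, while the k8s-app=metrics-server label selector is an assumption about how the minikube addon labels the deployment.

		kubectl -n kube-system get pods | grep metrics-server
		kubectl -n kube-system describe pod -l k8s-app=metrics-server | tail -n 30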
	I0318 14:24:31.192059 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:31.192106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:33.708873 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:33.723861 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:33.723954 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:33.766843 1129259 cri.go:89] found id: ""
	I0318 14:24:33.766884 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.766899 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:33.766908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:33.766991 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:33.808273 1129259 cri.go:89] found id: ""
	I0318 14:24:33.808308 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.808319 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:33.808327 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:33.808401 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:33.847755 1129259 cri.go:89] found id: ""
	I0318 14:24:33.847789 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.847801 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:33.847823 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:33.847909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:33.888733 1129259 cri.go:89] found id: ""
	I0318 14:24:33.888785 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.888807 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:33.888817 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:33.888892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:33.927231 1129259 cri.go:89] found id: ""
	I0318 14:24:33.927281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.927294 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:33.927301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:33.927370 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:33.968573 1129259 cri.go:89] found id: ""
	I0318 14:24:33.968602 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.968612 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:33.968619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:33.968685 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:34.019265 1129259 cri.go:89] found id: ""
	I0318 14:24:34.019298 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.019314 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:34.019321 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:34.019392 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:34.059195 1129259 cri.go:89] found id: ""
	I0318 14:24:34.059226 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.059237 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:34.059251 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:34.059268 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:34.101211 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:34.101252 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:34.154985 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:34.155029 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:34.169762 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:34.169798 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:34.247258 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:34.247289 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:34.247304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:33.698578 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.698656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.814759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:38.311080 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:40.312503 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:37.800024 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.801292 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:36.829539 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:36.844908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:36.845003 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:36.883646 1129259 cri.go:89] found id: ""
	I0318 14:24:36.883673 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.883682 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:36.883688 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:36.883742 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:36.927651 1129259 cri.go:89] found id: ""
	I0318 14:24:36.927685 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.927700 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:36.927706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:36.927774 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:36.972206 1129259 cri.go:89] found id: ""
	I0318 14:24:36.972243 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.972256 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:36.972264 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:36.972337 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:37.011161 1129259 cri.go:89] found id: ""
	I0318 14:24:37.011203 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.011217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:37.011225 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:37.011293 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:37.050426 1129259 cri.go:89] found id: ""
	I0318 14:24:37.050456 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.050465 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:37.050472 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:37.050525 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:37.090240 1129259 cri.go:89] found id: ""
	I0318 14:24:37.090277 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.090288 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:37.090296 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:37.090371 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:37.138359 1129259 cri.go:89] found id: ""
	I0318 14:24:37.138392 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.138405 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:37.138414 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:37.138484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:37.175367 1129259 cri.go:89] found id: ""
	I0318 14:24:37.175397 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.175406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:37.175419 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:37.175438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.190633 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:37.190665 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:37.266426 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:37.266455 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:37.266474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:37.352005 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:37.352052 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:37.398004 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:37.398042 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:39.957926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:39.972906 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:39.972994 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:40.015482 1129259 cri.go:89] found id: ""
	I0318 14:24:40.015531 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.015543 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:40.015553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:40.015632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:40.057869 1129259 cri.go:89] found id: ""
	I0318 14:24:40.057901 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.057913 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:40.057921 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:40.057992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:40.099638 1129259 cri.go:89] found id: ""
	I0318 14:24:40.099666 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.099676 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:40.099683 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:40.099748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:40.137566 1129259 cri.go:89] found id: ""
	I0318 14:24:40.137607 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.137619 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:40.137629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:40.137698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:40.178781 1129259 cri.go:89] found id: ""
	I0318 14:24:40.178816 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.178828 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:40.178835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:40.178902 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:40.221065 1129259 cri.go:89] found id: ""
	I0318 14:24:40.221106 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.221118 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:40.221135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:40.221213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:40.262154 1129259 cri.go:89] found id: ""
	I0318 14:24:40.262193 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.262204 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:40.262212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:40.262288 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:40.302898 1129259 cri.go:89] found id: ""
	I0318 14:24:40.302932 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.302944 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:40.302957 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:40.302973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:40.384224 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:40.384248 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:40.384270 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:40.473257 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:40.473313 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:40.513518 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:40.513571 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:40.569342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:40.569393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.698736 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.699014 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.813028 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.814259 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.300121 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.802581 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:43.085260 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:43.100701 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:43.100773 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:43.141395 1129259 cri.go:89] found id: ""
	I0318 14:24:43.141441 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.141453 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:43.141462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:43.141531 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:43.185883 1129259 cri.go:89] found id: ""
	I0318 14:24:43.185918 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.185929 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:43.185938 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:43.186012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:43.225249 1129259 cri.go:89] found id: ""
	I0318 14:24:43.225281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.225292 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:43.225301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:43.225375 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:43.270433 1129259 cri.go:89] found id: ""
	I0318 14:24:43.270474 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.270484 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:43.270491 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:43.270557 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:43.312947 1129259 cri.go:89] found id: ""
	I0318 14:24:43.312975 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.312986 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:43.312994 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:43.313061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:43.352095 1129259 cri.go:89] found id: ""
	I0318 14:24:43.352130 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.352144 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:43.352153 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:43.352222 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:43.394789 1129259 cri.go:89] found id: ""
	I0318 14:24:43.394820 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.394833 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:43.394840 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:43.394913 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:43.440612 1129259 cri.go:89] found id: ""
	I0318 14:24:43.440646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.440655 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:43.440668 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:43.440686 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:43.497257 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:43.497304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:43.513680 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:43.513715 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:43.599437 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:43.599471 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:43.599490 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:43.681435 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:43.681480 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:42.198235 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.199088 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.312598 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.814542 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.300765 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.801469 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:46.227650 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:46.242656 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:46.242724 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:46.288400 1129259 cri.go:89] found id: ""
	I0318 14:24:46.288434 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.288448 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:46.288457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:46.288544 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:46.327648 1129259 cri.go:89] found id: ""
	I0318 14:24:46.327691 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.327704 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:46.327712 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:46.327785 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:46.370251 1129259 cri.go:89] found id: ""
	I0318 14:24:46.370292 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.370305 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:46.370322 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:46.370404 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:46.413589 1129259 cri.go:89] found id: ""
	I0318 14:24:46.413629 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.413639 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:46.413646 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:46.413712 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:46.453557 1129259 cri.go:89] found id: ""
	I0318 14:24:46.453593 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.453606 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:46.453615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:46.453696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:46.492502 1129259 cri.go:89] found id: ""
	I0318 14:24:46.492538 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.492552 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:46.492560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:46.492641 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:46.534614 1129259 cri.go:89] found id: ""
	I0318 14:24:46.534646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.534656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:46.534662 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:46.534722 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:46.576300 1129259 cri.go:89] found id: ""
	I0318 14:24:46.576331 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.576340 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:46.576351 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:46.576363 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.665281 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:46.665329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:46.712011 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:46.712050 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:46.799071 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:46.799128 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:46.814892 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:46.814921 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:46.893065 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.393340 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:49.407307 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:49.407388 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:49.449296 1129259 cri.go:89] found id: ""
	I0318 14:24:49.449330 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.449343 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:49.449351 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:49.449412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:49.489753 1129259 cri.go:89] found id: ""
	I0318 14:24:49.489781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.489790 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:49.489796 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:49.489865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:49.533692 1129259 cri.go:89] found id: ""
	I0318 14:24:49.533740 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.533756 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:49.533765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:49.533849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:49.580932 1129259 cri.go:89] found id: ""
	I0318 14:24:49.580980 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.580992 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:49.581001 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:49.581090 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:49.617642 1129259 cri.go:89] found id: ""
	I0318 14:24:49.617672 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.617684 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:49.617692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:49.617758 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:49.655313 1129259 cri.go:89] found id: ""
	I0318 14:24:49.655342 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.655351 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:49.655358 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:49.655412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:49.694613 1129259 cri.go:89] found id: ""
	I0318 14:24:49.694645 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.694656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:49.694665 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:49.694735 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:49.736954 1129259 cri.go:89] found id: ""
	I0318 14:24:49.737005 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.737017 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:49.737030 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:49.737051 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:49.779496 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:49.779540 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:49.836505 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:49.836549 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:49.853299 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:49.853329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:49.929231 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.929254 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:49.929269 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.699746 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.198789 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:51.199313 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.311753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.311952 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.301766 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.513104 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:52.534931 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:52.535032 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:52.578668 1129259 cri.go:89] found id: ""
	I0318 14:24:52.578706 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.578720 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:52.578731 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:52.578788 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:52.616799 1129259 cri.go:89] found id: ""
	I0318 14:24:52.616829 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.616838 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:52.616845 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:52.616909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:52.659502 1129259 cri.go:89] found id: ""
	I0318 14:24:52.659595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.659616 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:52.659627 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:52.659696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:52.704402 1129259 cri.go:89] found id: ""
	I0318 14:24:52.704431 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.704439 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:52.704446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:52.704524 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:52.748018 1129259 cri.go:89] found id: ""
	I0318 14:24:52.748043 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.748052 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:52.748059 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:52.748128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:52.786901 1129259 cri.go:89] found id: ""
	I0318 14:24:52.786942 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.786956 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:52.786966 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:52.787040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:52.828259 1129259 cri.go:89] found id: ""
	I0318 14:24:52.828288 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.828298 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:52.828304 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:52.828360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:52.867439 1129259 cri.go:89] found id: ""
	I0318 14:24:52.867470 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.867482 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:52.867495 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:52.867513 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:52.920709 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:52.920755 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:52.936596 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:52.936631 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:53.012271 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:53.012300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:53.012315 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.092318 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:53.092358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:55.642662 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:55.656650 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:55.656725 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:55.700050 1129259 cri.go:89] found id: ""
	I0318 14:24:55.700085 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.700099 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:55.700109 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:55.700183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:55.742561 1129259 cri.go:89] found id: ""
	I0318 14:24:55.742599 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.742608 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:55.742614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:55.742668 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:55.780395 1129259 cri.go:89] found id: ""
	I0318 14:24:55.780427 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.780435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:55.780442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:55.780505 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:55.819259 1129259 cri.go:89] found id: ""
	I0318 14:24:55.819291 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.819301 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:55.819310 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:55.819366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:55.859189 1129259 cri.go:89] found id: ""
	I0318 14:24:55.859227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.859240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:55.859249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:55.859322 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:55.900012 1129259 cri.go:89] found id: ""
	I0318 14:24:55.900050 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.900062 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:55.900070 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:55.900146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:55.936548 1129259 cri.go:89] found id: ""
	I0318 14:24:55.936578 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.936587 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:55.936595 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:55.936661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:55.977201 1129259 cri.go:89] found id: ""
	I0318 14:24:55.977241 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.977254 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:55.977266 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:55.977281 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:56.030548 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:56.030603 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:56.047923 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:56.047959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:56.129425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:56.129457 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:56.129474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.199935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:55.699461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.811981 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.814200 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.799464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.800623 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.224109 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:56.224173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.771513 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:58.786323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:58.786416 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:58.832801 1129259 cri.go:89] found id: ""
	I0318 14:24:58.832843 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.832856 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:58.832868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:58.832945 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:58.873757 1129259 cri.go:89] found id: ""
	I0318 14:24:58.873792 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.873802 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:58.873811 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:58.873875 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:58.920727 1129259 cri.go:89] found id: ""
	I0318 14:24:58.920759 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.920769 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:58.920775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:58.920841 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:58.975483 1129259 cri.go:89] found id: ""
	I0318 14:24:58.975524 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.975538 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:58.975549 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:58.975627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:59.027055 1129259 cri.go:89] found id: ""
	I0318 14:24:59.027092 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.027104 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:59.027113 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:59.027195 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:59.073394 1129259 cri.go:89] found id: ""
	I0318 14:24:59.073435 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.073457 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:59.073466 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:59.073536 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:59.114945 1129259 cri.go:89] found id: ""
	I0318 14:24:59.114982 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.114991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:59.114998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:59.115056 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:59.155496 1129259 cri.go:89] found id: ""
	I0318 14:24:59.155533 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.155545 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:59.155558 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:59.155574 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:59.214435 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:59.214476 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:59.230733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:59.230780 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:59.308976 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:59.309007 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:59.309024 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:59.396237 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:59.396287 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.198049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:00.199613 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.312698 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.811687 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.299462 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.300239 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:05.301621 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.941736 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:01.955973 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:01.956058 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:01.995149 1129259 cri.go:89] found id: ""
	I0318 14:25:01.995187 1129259 logs.go:276] 0 containers: []
	W0318 14:25:01.995208 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:01.995217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:01.995287 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:02.036739 1129259 cri.go:89] found id: ""
	I0318 14:25:02.036780 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.036794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:02.036804 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:02.036880 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:02.074909 1129259 cri.go:89] found id: ""
	I0318 14:25:02.074937 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.074947 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:02.074954 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:02.075039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:02.112164 1129259 cri.go:89] found id: ""
	I0318 14:25:02.112203 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.112215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:02.112223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:02.112281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:02.150756 1129259 cri.go:89] found id: ""
	I0318 14:25:02.150795 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.150808 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:02.150816 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:02.150885 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:02.194475 1129259 cri.go:89] found id: ""
	I0318 14:25:02.194511 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.194522 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:02.194531 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:02.194603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:02.237472 1129259 cri.go:89] found id: ""
	I0318 14:25:02.237499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.237508 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:02.237514 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:02.237582 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:02.278094 1129259 cri.go:89] found id: ""
	I0318 14:25:02.278136 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.278157 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:02.278171 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:02.278190 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:02.366946 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:02.367004 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.412234 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:02.412267 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:02.470036 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:02.470109 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:02.487051 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:02.487085 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:02.574515 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.074768 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:05.090386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:05.090466 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:05.131144 1129259 cri.go:89] found id: ""
	I0318 14:25:05.131180 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.131190 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:05.131198 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:05.131254 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:05.171613 1129259 cri.go:89] found id: ""
	I0318 14:25:05.171653 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.171668 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:05.171676 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:05.171748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:05.219256 1129259 cri.go:89] found id: ""
	I0318 14:25:05.219296 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.219310 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:05.219320 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:05.219410 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:05.258580 1129259 cri.go:89] found id: ""
	I0318 14:25:05.258615 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.258625 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:05.258633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:05.258688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:05.297198 1129259 cri.go:89] found id: ""
	I0318 14:25:05.297230 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.297240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:05.297249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:05.297319 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:05.341148 1129259 cri.go:89] found id: ""
	I0318 14:25:05.341184 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.341196 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:05.341205 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:05.341274 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:05.382094 1129259 cri.go:89] found id: ""
	I0318 14:25:05.382121 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.382129 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:05.382135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:05.382199 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:05.422027 1129259 cri.go:89] found id: ""
	I0318 14:25:05.422074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.422083 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:05.422092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:05.422106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:05.474193 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:05.474238 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:05.490325 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:05.490364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:05.566999 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.567029 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:05.567048 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:05.647205 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:05.647247 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.200341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:04.698040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:06.312239 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.811427 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:07.800597 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:10.300964 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.192390 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:08.207905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:08.207992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:08.247221 1129259 cri.go:89] found id: ""
	I0318 14:25:08.247257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.247269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:08.247278 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:08.247347 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:08.289460 1129259 cri.go:89] found id: ""
	I0318 14:25:08.289496 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.289509 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:08.289516 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:08.289601 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:08.330232 1129259 cri.go:89] found id: ""
	I0318 14:25:08.330273 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.330286 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:08.330294 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:08.330366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:08.368035 1129259 cri.go:89] found id: ""
	I0318 14:25:08.368074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.368086 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:08.368094 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:08.368170 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:08.413598 1129259 cri.go:89] found id: ""
	I0318 14:25:08.413631 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.413641 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:08.413647 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:08.413745 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:08.451706 1129259 cri.go:89] found id: ""
	I0318 14:25:08.451742 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.451754 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:08.451762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:08.451856 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:08.491037 1129259 cri.go:89] found id: ""
	I0318 14:25:08.491075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.491088 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:08.491096 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:08.491175 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:08.529376 1129259 cri.go:89] found id: ""
	I0318 14:25:08.529412 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.529423 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:08.529435 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:08.529453 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:08.586539 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:08.586580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:08.602197 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:08.602226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:08.678158 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:08.678186 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:08.678202 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:08.764272 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:08.764326 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:06.700315 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:09.198241 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.198296 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.312458 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:13.312602 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:12.799474 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:14.800216 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.307681 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:11.322482 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:11.322565 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:11.361333 1129259 cri.go:89] found id: ""
	I0318 14:25:11.361366 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.361378 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:11.361386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:11.361457 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:11.399404 1129259 cri.go:89] found id: ""
	I0318 14:25:11.399444 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.399468 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:11.399486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:11.399556 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:11.438279 1129259 cri.go:89] found id: ""
	I0318 14:25:11.438324 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.438338 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:11.438350 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:11.438426 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:11.474991 1129259 cri.go:89] found id: ""
	I0318 14:25:11.475039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.475050 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:11.475058 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:11.475128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:11.511152 1129259 cri.go:89] found id: ""
	I0318 14:25:11.511185 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.511195 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:11.511204 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:11.511271 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:11.549752 1129259 cri.go:89] found id: ""
	I0318 14:25:11.549794 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.549806 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:11.549814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:11.549886 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:11.587089 1129259 cri.go:89] found id: ""
	I0318 14:25:11.587117 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.587135 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:11.587152 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:11.587205 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:11.621515 1129259 cri.go:89] found id: ""
	I0318 14:25:11.621547 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.621559 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:11.621574 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:11.621592 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:11.680905 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:11.680948 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:11.696472 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:11.696508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:11.772013 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:11.772035 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:11.772054 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:11.855131 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:11.855182 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:14.396034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:14.410601 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:14.410677 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:14.449351 1129259 cri.go:89] found id: ""
	I0318 14:25:14.449392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.449404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:14.449413 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:14.449484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:14.488011 1129259 cri.go:89] found id: ""
	I0318 14:25:14.488039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.488049 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:14.488055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:14.488115 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:14.529089 1129259 cri.go:89] found id: ""
	I0318 14:25:14.529128 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.529141 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:14.529148 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:14.529219 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:14.567919 1129259 cri.go:89] found id: ""
	I0318 14:25:14.567952 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.567962 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:14.567975 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:14.568039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:14.604744 1129259 cri.go:89] found id: ""
	I0318 14:25:14.604785 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.604798 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:14.604806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:14.604872 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:14.643367 1129259 cri.go:89] found id: ""
	I0318 14:25:14.643396 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.643405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:14.643411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:14.643473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:14.680584 1129259 cri.go:89] found id: ""
	I0318 14:25:14.680623 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.680639 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:14.680652 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:14.680726 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:14.720040 1129259 cri.go:89] found id: ""
	I0318 14:25:14.720070 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.720080 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:14.720092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:14.720106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:14.773483 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:14.773525 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:14.788628 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:14.788664 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:14.862912 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:14.862941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:14.862959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:14.945001 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:14.945047 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:13.199314 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.199666 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.812120 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.813219 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.814195 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:16.800432 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.299589 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.491984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:17.505305 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:17.505373 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:17.548465 1129259 cri.go:89] found id: ""
	I0318 14:25:17.548493 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.548501 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:17.548508 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:17.548566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:17.590043 1129259 cri.go:89] found id: ""
	I0318 14:25:17.590075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.590084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:17.590090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:17.590147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:17.628014 1129259 cri.go:89] found id: ""
	I0318 14:25:17.628042 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.628051 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:17.628057 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:17.628108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:17.666781 1129259 cri.go:89] found id: ""
	I0318 14:25:17.666814 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.666826 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:17.666835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:17.666892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:17.705989 1129259 cri.go:89] found id: ""
	I0318 14:25:17.706028 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.706048 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:17.706056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:17.706134 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:17.743782 1129259 cri.go:89] found id: ""
	I0318 14:25:17.743815 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.743843 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:17.743853 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:17.743923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:17.787400 1129259 cri.go:89] found id: ""
	I0318 14:25:17.787431 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.787439 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:17.787446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:17.787509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:17.825236 1129259 cri.go:89] found id: ""
	I0318 14:25:17.825270 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.825279 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:17.825291 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:17.825309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:17.877845 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:17.877888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:17.893733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:17.893768 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:17.987782 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:17.987809 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:17.987845 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:18.077756 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:18.077802 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:20.625530 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:20.639692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:20.639783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:20.678892 1129259 cri.go:89] found id: ""
	I0318 14:25:20.678927 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.678939 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:20.678948 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:20.679020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:20.716077 1129259 cri.go:89] found id: ""
	I0318 14:25:20.716109 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.716119 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:20.716124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:20.716179 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:20.756708 1129259 cri.go:89] found id: ""
	I0318 14:25:20.756737 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.756748 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:20.756756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:20.756823 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:20.793692 1129259 cri.go:89] found id: ""
	I0318 14:25:20.793728 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.793740 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:20.793749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:20.793822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:20.834607 1129259 cri.go:89] found id: ""
	I0318 14:25:20.834638 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.834649 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:20.834657 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:20.834728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:20.872583 1129259 cri.go:89] found id: ""
	I0318 14:25:20.872616 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.872625 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:20.872632 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:20.872688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:20.906061 1129259 cri.go:89] found id: ""
	I0318 14:25:20.906099 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.906112 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:20.906120 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:20.906183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:20.942582 1129259 cri.go:89] found id: ""
	I0318 14:25:20.942612 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.942621 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:20.942632 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:20.942646 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:20.958461 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:20.958500 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:21.032841 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:21.032867 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:21.032896 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:21.110717 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:21.110764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:17.698783 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.698980 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.804733 1128788 pod_ready.go:81] duration metric: took 4m0.000568505s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:21.804764 1128788 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:21.804783 1128788 pod_ready.go:38] duration metric: took 4m13.068724908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:21.804834 1128788 kubeadm.go:591] duration metric: took 4m21.284795634s to restartPrimaryControlPlane
	W0318 14:25:21.804919 1128788 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:21.804954 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:21.300889 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:23.800547 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:25.803188 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.160015 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:21.160055 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:23.715103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:23.729231 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:23.729324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:23.779123 1129259 cri.go:89] found id: ""
	I0318 14:25:23.779157 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.779166 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:23.779172 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:23.779247 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:23.820353 1129259 cri.go:89] found id: ""
	I0318 14:25:23.820397 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.820410 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:23.820427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:23.820498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:23.857375 1129259 cri.go:89] found id: ""
	I0318 14:25:23.857405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.857416 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:23.857422 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:23.857490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:23.895114 1129259 cri.go:89] found id: ""
	I0318 14:25:23.895153 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.895165 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:23.895173 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:23.895239 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:23.939728 1129259 cri.go:89] found id: ""
	I0318 14:25:23.939764 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.939776 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:23.939784 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:23.939866 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:23.980585 1129259 cri.go:89] found id: ""
	I0318 14:25:23.980618 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.980631 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:23.980640 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:23.980711 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:24.019562 1129259 cri.go:89] found id: ""
	I0318 14:25:24.019596 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.019604 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:24.019611 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:24.019700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:24.069418 1129259 cri.go:89] found id: ""
	I0318 14:25:24.069455 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.069466 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:24.069478 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:24.069502 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:24.150859 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:24.150893 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:24.150913 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:24.258358 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:24.258408 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:24.304571 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:24.304609 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:24.366826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:24.366882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:21.699436 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:24.199193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:28.300495 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:30.300870 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:26.886056 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:26.904239 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:26.904315 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:26.950812 1129259 cri.go:89] found id: ""
	I0318 14:25:26.950847 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.950859 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:26.950866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:26.950957 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:26.999189 1129259 cri.go:89] found id: ""
	I0318 14:25:26.999224 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.999237 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:26.999246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:26.999312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:27.040452 1129259 cri.go:89] found id: ""
	I0318 14:25:27.040488 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.040499 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:27.040505 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:27.040586 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:27.078751 1129259 cri.go:89] found id: ""
	I0318 14:25:27.078782 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.078792 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:27.078798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:27.078865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:27.116428 1129259 cri.go:89] found id: ""
	I0318 14:25:27.116465 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.116477 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:27.116486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:27.116567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:27.152882 1129259 cri.go:89] found id: ""
	I0318 14:25:27.152922 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.152934 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:27.152942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:27.153023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:27.194470 1129259 cri.go:89] found id: ""
	I0318 14:25:27.194506 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.194518 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:27.194528 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:27.194599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:27.235910 1129259 cri.go:89] found id: ""
	I0318 14:25:27.235939 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.235948 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:27.235959 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:27.235973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:27.302132 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:27.302189 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:27.315806 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:27.315866 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:27.398210 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:27.398240 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:27.398255 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:27.479388 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:27.479432 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:30.026721 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:30.043060 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:30.043133 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:30.083373 1129259 cri.go:89] found id: ""
	I0318 14:25:30.083405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.083415 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:30.083423 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:30.083498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:30.121448 1129259 cri.go:89] found id: ""
	I0318 14:25:30.121485 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.121498 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:30.121506 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:30.121587 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:30.160527 1129259 cri.go:89] found id: ""
	I0318 14:25:30.160557 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.160566 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:30.160574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:30.160636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:30.199812 1129259 cri.go:89] found id: ""
	I0318 14:25:30.199870 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.199884 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:30.199895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:30.199970 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:30.242922 1129259 cri.go:89] found id: ""
	I0318 14:25:30.242959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.242971 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:30.242983 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:30.243053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:30.280918 1129259 cri.go:89] found id: ""
	I0318 14:25:30.280949 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.280962 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:30.280968 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:30.281021 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:30.319928 1129259 cri.go:89] found id: ""
	I0318 14:25:30.319959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.319968 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:30.319974 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:30.320040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:30.363693 1129259 cri.go:89] found id: ""
	I0318 14:25:30.363723 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.363733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:30.363744 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:30.363757 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:30.419559 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:30.419608 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:30.435030 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:30.435078 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:30.514849 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:30.514885 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:30.514903 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:30.601660 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:30.601711 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:26.700384 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:29.203012 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:32.800506 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:35.299464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.150817 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:33.165959 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:33.166045 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:33.205823 1129259 cri.go:89] found id: ""
	I0318 14:25:33.205862 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.205874 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:33.205884 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:33.205951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:33.267817 1129259 cri.go:89] found id: ""
	I0318 14:25:33.267865 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.267878 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:33.267886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:33.267977 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:33.309310 1129259 cri.go:89] found id: ""
	I0318 14:25:33.309338 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.309346 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:33.309353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:33.309417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:33.350169 1129259 cri.go:89] found id: ""
	I0318 14:25:33.350202 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.350215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:33.350223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:33.350289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:33.391919 1129259 cri.go:89] found id: ""
	I0318 14:25:33.391961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.391973 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:33.391981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:33.392049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:33.433001 1129259 cri.go:89] found id: ""
	I0318 14:25:33.433056 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.433069 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:33.433078 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:33.433150 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:33.474482 1129259 cri.go:89] found id: ""
	I0318 14:25:33.474513 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.474533 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:33.474542 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:33.474603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:33.512280 1129259 cri.go:89] found id: ""
	I0318 14:25:33.512314 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.512323 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:33.512333 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:33.512347 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:33.593336 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:33.593378 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:33.636001 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:33.636038 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:33.688881 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:33.688922 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:33.704549 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:33.704580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:33.779659 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:31.698372 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.699450 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.199443 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:37.299695 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:39.800741 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.280240 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:36.295566 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:36.295646 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:36.336195 1129259 cri.go:89] found id: ""
	I0318 14:25:36.336235 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.336248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:36.336257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:36.336334 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:36.378038 1129259 cri.go:89] found id: ""
	I0318 14:25:36.378084 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.378099 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:36.378110 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:36.378191 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:36.425389 1129259 cri.go:89] found id: ""
	I0318 14:25:36.425433 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.425446 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:36.425453 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:36.425512 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:36.464639 1129259 cri.go:89] found id: ""
	I0318 14:25:36.464683 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.464749 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:36.464763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:36.464828 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:36.509515 1129259 cri.go:89] found id: ""
	I0318 14:25:36.509550 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.509563 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:36.509573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:36.509645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:36.554761 1129259 cri.go:89] found id: ""
	I0318 14:25:36.554789 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.554800 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:36.554806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:36.554859 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:36.593817 1129259 cri.go:89] found id: ""
	I0318 14:25:36.593852 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.593861 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:36.593868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:36.593923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:36.634005 1129259 cri.go:89] found id: ""
	I0318 14:25:36.634038 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.634050 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:36.634063 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:36.634081 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:36.687869 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:36.687910 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:36.704507 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:36.704550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:36.785201 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:36.785257 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:36.785275 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:36.866058 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:36.866104 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:39.409796 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:39.426897 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:39.426972 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:39.472221 1129259 cri.go:89] found id: ""
	I0318 14:25:39.472257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.472269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:39.472285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:39.472352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:39.513920 1129259 cri.go:89] found id: ""
	I0318 14:25:39.513961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.513974 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:39.513981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:39.514049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:39.555502 1129259 cri.go:89] found id: ""
	I0318 14:25:39.555538 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.555552 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:39.555565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:39.555627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:39.601583 1129259 cri.go:89] found id: ""
	I0318 14:25:39.601614 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.601622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:39.601628 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:39.601693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:39.648429 1129259 cri.go:89] found id: ""
	I0318 14:25:39.648464 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.648473 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:39.648488 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:39.648564 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:39.698498 1129259 cri.go:89] found id: ""
	I0318 14:25:39.698531 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.698543 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:39.698551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:39.698617 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:39.751350 1129259 cri.go:89] found id: ""
	I0318 14:25:39.751392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.751403 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:39.751411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:39.751482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:39.801912 1129259 cri.go:89] found id: ""
	I0318 14:25:39.801944 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.801956 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:39.801968 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:39.801987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:39.816041 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:39.816076 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:39.899569 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:39.899599 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:39.899621 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:39.980913 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:39.980961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:40.026279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:40.026319 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:38.199879 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:40.698620 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:41.801098 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:44.301379 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:42.585034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:42.601055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:42.601161 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:42.652386 1129259 cri.go:89] found id: ""
	I0318 14:25:42.652422 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.652434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:42.652442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:42.652517 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:42.703304 1129259 cri.go:89] found id: ""
	I0318 14:25:42.703341 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.703353 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:42.703361 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:42.703433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:42.747938 1129259 cri.go:89] found id: ""
	I0318 14:25:42.747972 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.747983 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:42.747992 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:42.748061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:42.793889 1129259 cri.go:89] found id: ""
	I0318 14:25:42.793923 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.793934 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:42.793943 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:42.794012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:42.837991 1129259 cri.go:89] found id: ""
	I0318 14:25:42.838096 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.838124 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:42.838143 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:42.838225 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:42.881892 1129259 cri.go:89] found id: ""
	I0318 14:25:42.882011 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.882036 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:42.882055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:42.882140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:42.921175 1129259 cri.go:89] found id: ""
	I0318 14:25:42.921217 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.921229 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:42.921238 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:42.921310 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:42.966634 1129259 cri.go:89] found id: ""
	I0318 14:25:42.966674 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.966687 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:42.966702 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:42.966720 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:42.982243 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:42.982290 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:43.082154 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:43.082187 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:43.082205 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:43.175904 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:43.175953 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:43.220128 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:43.220224 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:45.785917 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:45.801648 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:45.801736 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:45.842731 1129259 cri.go:89] found id: ""
	I0318 14:25:45.842769 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.842782 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:45.842797 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:45.842858 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:45.887726 1129259 cri.go:89] found id: ""
	I0318 14:25:45.887771 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.887783 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:45.887792 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:45.887900 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:45.929349 1129259 cri.go:89] found id: ""
	I0318 14:25:45.929384 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.929395 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:45.929401 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:45.929473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:45.971540 1129259 cri.go:89] found id: ""
	I0318 14:25:45.971582 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.971595 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:45.971604 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:45.971681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:46.012461 1129259 cri.go:89] found id: ""
	I0318 14:25:46.012499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.012521 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:46.012530 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:46.012607 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:46.057527 1129259 cri.go:89] found id: ""
	I0318 14:25:46.057556 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.057566 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:46.057572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:46.057628 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:46.101115 1129259 cri.go:89] found id: ""
	I0318 14:25:46.101146 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.101156 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:46.101163 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:46.101218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:46.144690 1129259 cri.go:89] found id: ""
	I0318 14:25:46.144722 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.144733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:46.144747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:46.144763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:41.692077 1128964 pod_ready.go:81] duration metric: took 4m0.00104s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:41.692109 1128964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:41.692136 1128964 pod_ready.go:38] duration metric: took 4m13.711186182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:41.692170 1128964 kubeadm.go:591] duration metric: took 4m21.341445822s to restartPrimaryControlPlane
	W0318 14:25:41.692279 1128964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:41.692345 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:46.800687 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:49.300012 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:46.198508 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:46.198552 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:46.213920 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:46.213959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:46.307837 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:46.307870 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:46.307884 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:46.393348 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:46.393393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:48.947758 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:48.963529 1129259 kubeadm.go:591] duration metric: took 4m3.701563316s to restartPrimaryControlPlane
	W0318 14:25:48.963609 1129259 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:48.963632 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:50.782362 1129259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.818697959s)
	I0318 14:25:50.782464 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:50.798866 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:50.810841 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:50.822394 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:50.822417 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:50.822464 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:50.833695 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:50.833763 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:50.845393 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:50.856807 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:50.856882 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:50.868756 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.879442 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:50.879517 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.890725 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:50.901505 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:50.901576 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
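The block above is minikube's stale kubeconfig cleanup before re-initialising the control plane: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and, when the grep fails (here simply because the files no longer exist after kubeadm reset), the file is removed so kubeadm init can rewrite it. A minimal shell sketch of that check-and-remove pattern, using the endpoint and file names from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done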
	I0318 14:25:50.912911 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:50.994085 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:25:50.994244 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:51.166111 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:51.166240 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:51.166390 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:51.374393 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:51.376093 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:51.376230 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:51.376323 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:51.376464 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:51.376538 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:51.376620 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:51.376715 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:51.376821 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:51.376930 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:51.377042 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:51.377141 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:51.377202 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:51.377292 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:51.485218 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:51.556003 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:51.865954 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:52.103582 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:52.120863 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:52.122310 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:52.122433 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:52.280292 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:54.173048 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.368065771s)
	I0318 14:25:54.173145 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:54.192139 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:54.204909 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:54.217096 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:54.217126 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:54.217182 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:54.227905 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:54.228009 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:54.239854 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:54.250668 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:54.250744 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:54.263509 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.274202 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:54.274265 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.285342 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:54.296064 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:54.296157 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:25:54.307985 1128788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:54.371118 1128788 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:25:54.371202 1128788 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:54.551187 1128788 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:54.551377 1128788 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:54.551551 1128788 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:54.780034 1128788 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:54.782426 1128788 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:54.782545 1128788 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:54.782650 1128788 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:54.782735 1128788 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:54.782829 1128788 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:54.782930 1128788 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:54.783213 1128788 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:54.783717 1128788 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:54.784390 1128788 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:54.784849 1128788 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:54.785263 1128788 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:54.785725 1128788 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:54.785826 1128788 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:55.130998 1128788 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:55.387076 1128788 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:55.517240 1128788 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:51.300209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:53.303010 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.800703 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.906565 1128788 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:55.907198 1128788 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:55.909674 1128788 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:52.282451 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:25:52.282559 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:52.289015 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:52.290093 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:52.290987 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:52.293794 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:55.912196 1128788 out.go:204]   - Booting up control plane ...
	I0318 14:25:55.912323 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:55.912407 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:55.912494 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:55.932596 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:55.935171 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:55.935520 1128788 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:56.083395 1128788 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:58.300288 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:00.800291 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:02.086878 1128788 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002842 seconds
	I0318 14:26:02.087052 1128788 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:02.102499 1128788 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:02.637889 1128788 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:02.638152 1128788 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-767719 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:03.157386 1128788 kubeadm.go:309] [bootstrap-token] Using token: do2whq.efhsaljmpmqgv9gj
	I0318 14:26:03.159248 1128788 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:03.159429 1128788 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:03.167328 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:03.180628 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:03.185253 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:03.190014 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:03.202714 1128788 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:03.223282 1128788 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:03.504303 1128788 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:03.614837 1128788 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:03.614872 1128788 kubeadm.go:309] 
	I0318 14:26:03.614978 1128788 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:03.615004 1128788 kubeadm.go:309] 
	I0318 14:26:03.615107 1128788 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:03.615117 1128788 kubeadm.go:309] 
	I0318 14:26:03.615149 1128788 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:03.615219 1128788 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:03.615285 1128788 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:03.615293 1128788 kubeadm.go:309] 
	I0318 14:26:03.615354 1128788 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:03.615365 1128788 kubeadm.go:309] 
	I0318 14:26:03.615421 1128788 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:03.615430 1128788 kubeadm.go:309] 
	I0318 14:26:03.615486 1128788 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:03.615578 1128788 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:03.615669 1128788 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:03.615679 1128788 kubeadm.go:309] 
	I0318 14:26:03.615778 1128788 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:03.615887 1128788 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:03.615897 1128788 kubeadm.go:309] 
	I0318 14:26:03.615998 1128788 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616120 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:03.616149 1128788 kubeadm.go:309] 	--control-plane 
	I0318 14:26:03.616159 1128788 kubeadm.go:309] 
	I0318 14:26:03.616266 1128788 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:03.616276 1128788 kubeadm.go:309] 
	I0318 14:26:03.616371 1128788 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616500 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:03.617330 1128788 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:03.617374 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:26:03.617384 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:03.619394 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:03.620836 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:03.665582 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
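Here the log shows the bridge CNI being chosen for the kvm2 driver with the crio runtime and a conflist being copied to /etc/cni/net.d/1-k8s.conflist; the 457-byte file itself is not printed. A hedged sketch of writing a bridge-plugin conflist of that general kind (field values are illustrative, not the ones the test actually used):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF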
	I0318 14:26:03.812834 1128788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:03.812897 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:03.812943 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-767719 minikube.k8s.io/updated_at=2024_03_18T14_26_03_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=embed-certs-767719 minikube.k8s.io/primary=true
	I0318 14:26:03.899419 1128788 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:04.104407 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:04.604499 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.104532 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.605047 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:02.800707 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:04.802167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:06.105187 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:06.604462 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.104411 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.605096 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.104448 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.604430 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.104707 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.605130 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.104955 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.605165 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.300575 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:09.798776 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:11.104436 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.605273 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.104851 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.604819 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.104669 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.605089 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.105486 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.604568 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.104455 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.604422 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.799935 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:13.800907 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:15.801754 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:16.105107 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:16.205506 1128788 kubeadm.go:1107] duration metric: took 12.39266353s to wait for elevateKubeSystemPrivileges
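The run of identical "kubectl get sa default" commands above is the wait behind the 12.39s elevateKubeSystemPrivileges metric: the test retries roughly every 500 ms until kube-controller-manager has created the "default" service account. An equivalent shell poll, using the same binary path and kubeconfig shown in the log:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the default ServiceAccount exists
    done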
	W0318 14:26:16.205558 1128788 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:16.205570 1128788 kubeadm.go:393] duration metric: took 5m15.738081871s to StartCluster
	I0318 14:26:16.205599 1128788 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.205720 1128788 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:16.208645 1128788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.209157 1128788 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:16.210915 1128788 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:16.209206 1128788 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:16.209401 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:16.212258 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:16.212275 1128788 addons.go:69] Setting default-storageclass=true in profile "embed-certs-767719"
	I0318 14:26:16.212351 1128788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-767719"
	I0318 14:26:16.212260 1128788 addons.go:69] Setting metrics-server=true in profile "embed-certs-767719"
	I0318 14:26:16.212415 1128788 addons.go:234] Setting addon metrics-server=true in "embed-certs-767719"
	W0318 14:26:16.212431 1128788 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:16.212469 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212260 1128788 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-767719"
	I0318 14:26:16.212512 1128788 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-767719"
	W0318 14:26:16.212527 1128788 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:16.212560 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.212983 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213003 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213028 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.213040 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.231532 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0318 14:26:16.231543 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0318 14:26:16.232128 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0318 14:26:16.232280 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232284 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232882 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.232907 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.232922 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.233258 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233284 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.233360 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.233479 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233501 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.235956 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236151 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236372 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236411 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.236545 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236568 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.240163 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.244336 1128788 addons.go:234] Setting addon default-storageclass=true in "embed-certs-767719"
	W0318 14:26:16.244370 1128788 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:16.244407 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.244845 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.244894 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.257940 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0318 14:26:16.258701 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.259359 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.259386 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.259769 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.260030 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.262272 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.262286 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0318 14:26:16.264459 1128788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:16.262834 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.265430 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I0318 14:26:16.266198 1128788 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.266220 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:16.266240 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.266482 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.266663 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.266676 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267253 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.267277 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267753 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.268456 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.268605 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.269068 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.269098 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.269804 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270398 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.270420 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270711 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.270989 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.271183 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.271362 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.271984 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.273854 1128788 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:14.305258 1128964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.612890386s)
	I0318 14:26:14.305324 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:14.325572 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:26:14.337875 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:26:14.350490 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:26:14.350530 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:26:14.350592 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:26:14.361521 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:26:14.361612 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:26:14.372767 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:26:14.383545 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:26:14.383614 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:26:14.394057 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.404187 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:26:14.404261 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.415029 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:26:14.425738 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:26:14.425820 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:26:14.436847 1128964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:26:14.674909 1128964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:16.275278 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:16.275298 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:16.275323 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.278500 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.278909 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.278939 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.279230 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.279437 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.279612 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.279748 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.286716 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0318 14:26:16.287176 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.287651 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.287678 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.288057 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.288248 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.290084 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.290359 1128788 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.290381 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:16.290404 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.293253 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293662 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.293688 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293886 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.294078 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.294241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.294398 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.460832 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:16.537089 1128788 node_ready.go:35] waiting up to 6m0s for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550362 1128788 node_ready.go:49] node "embed-certs-767719" has status "Ready":"True"
	I0318 14:26:16.550391 1128788 node_ready.go:38] duration metric: took 13.195546ms for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550405 1128788 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:16.557745 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:16.638531 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:16.638565 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:16.664638 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.762661 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:16.762713 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:16.792712 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.859169 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:16.859200 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:16.954827 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:18.103559 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.103592 1128788 pod_ready.go:81] duration metric: took 1.545818643s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.103606 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.256039 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.591350359s)
	I0318 14:26:18.256112 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256129 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256483 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256513 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256530 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256528 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.256541 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256918 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256936 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256950 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.264761 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.264788 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.265133 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.265164 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.265193 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.652953 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.653088 1128788 pod_ready.go:81] duration metric: took 549.466665ms for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.653124 1128788 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674506 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.674553 1128788 pod_ready.go:81] duration metric: took 21.386005ms for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674568 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.680422 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.887663901s)
	I0318 14:26:18.680486 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680498 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.680875 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.680887 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.680903 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.680921 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680928 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.681198 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.681199 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.681277 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.711919 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.711954 1128788 pod_ready.go:81] duration metric: took 37.376915ms for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.711968 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730096 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.730129 1128788 pod_ready.go:81] duration metric: took 18.151839ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730145 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.756000 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.801120989s)
	I0318 14:26:18.756076 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756091 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756416 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756435 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756445 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756452 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756849 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756883 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756895 1128788 addons.go:470] Verifying addon metrics-server=true in "embed-certs-767719"
	I0318 14:26:18.756917 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.759019 1128788 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 14:26:18.760442 1128788 addons.go:505] duration metric: took 2.551236037s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
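The addon step above copies the storageclass, storage-provisioner and metrics-server manifests onto the node and applies them with the cluster's own kubectl binary. Outside an integration run, the same addons would normally be toggled through the minikube CLI rather than by applying the manifests by hand, for example:

    minikube -p embed-certs-767719 addons enable metrics-server
    minikube -p embed-certs-767719 addons enable storage-provisioner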
	I0318 14:26:18.942164 1128788 pod_ready.go:92] pod "kube-proxy-f4547" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.942196 1128788 pod_ready.go:81] duration metric: took 212.040337ms for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.942205 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341772 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:19.341808 1128788 pod_ready.go:81] duration metric: took 399.594033ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341820 1128788 pod_ready.go:38] duration metric: took 2.791403027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:19.341841 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:19.341921 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:19.362110 1128788 api_server.go:72] duration metric: took 3.152894755s to wait for apiserver process to appear ...
	I0318 14:26:19.362150 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:19.362209 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:26:19.368138 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:26:19.369583 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:19.369608 1128788 api_server.go:131] duration metric: took 7.450993ms to wait for apiserver health ...
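	[editor's note] The healthz probe above goes straight at the apiserver over HTTPS and expects a bare "ok". To reproduce that check by hand while debugging, a minimal sketch follows; the IP and port come from the log above, and -k skips certificate verification because the serving cert is signed by the cluster's own CA (going through kubectl avoids that):

	    # Query the apiserver health endpoint the same way the log does (expects "ok").
	    curl -sk https://192.168.72.45:8443/healthz

	    # Or let kubectl handle the client certificates.
	    kubectl --context embed-certs-767719 get --raw='/healthz'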
	I0318 14:26:19.369617 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:19.545388 1128788 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:19.545423 1128788 system_pods.go:61] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.545428 1128788 system_pods.go:61] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.545431 1128788 system_pods.go:61] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.545434 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.545438 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.545441 1128788 system_pods.go:61] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.545443 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.545449 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.545455 1128788 system_pods.go:61] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.545464 1128788 system_pods.go:74] duration metric: took 175.840386ms to wait for pod list to return data ...
	I0318 14:26:19.545473 1128788 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:19.741364 1128788 default_sa.go:45] found service account: "default"
	I0318 14:26:19.741405 1128788 default_sa.go:55] duration metric: took 195.920075ms for default service account to be created ...
	I0318 14:26:19.741424 1128788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:19.945000 1128788 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:19.945039 1128788 system_pods.go:89] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.945047 1128788 system_pods.go:89] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.945053 1128788 system_pods.go:89] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.945060 1128788 system_pods.go:89] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.945066 1128788 system_pods.go:89] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.945070 1128788 system_pods.go:89] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.945076 1128788 system_pods.go:89] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.945087 1128788 system_pods.go:89] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.945097 1128788 system_pods.go:89] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.945110 1128788 system_pods.go:126] duration metric: took 203.67742ms to wait for k8s-apps to be running ...
	I0318 14:26:19.945122 1128788 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:19.945188 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:19.987286 1128788 system_svc.go:56] duration metric: took 42.149434ms WaitForService to wait for kubelet
	I0318 14:26:19.987328 1128788 kubeadm.go:576] duration metric: took 3.778120092s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:19.987361 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:20.141763 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:20.141803 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:20.141822 1128788 node_conditions.go:105] duration metric: took 154.45408ms to run NodePressure ...
	I0318 14:26:20.141840 1128788 start.go:240] waiting for startup goroutines ...
	I0318 14:26:20.141851 1128788 start.go:245] waiting for cluster config update ...
	I0318 14:26:20.141867 1128788 start.go:254] writing updated cluster config ...
	I0318 14:26:20.142268 1128788 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:20.206832 1128788 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:20.209057 1128788 out.go:177] * Done! kubectl is now configured to use "embed-certs-767719" cluster and "default" namespace by default
	I0318 14:26:18.302228 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:20.799704 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.444912 1128964 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:26:23.444993 1128964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:26:23.445098 1128964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:26:23.445212 1128964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:26:23.445359 1128964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:26:23.445461 1128964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:26:23.446790 1128964 out.go:204]   - Generating certificates and keys ...
	I0318 14:26:23.446904 1128964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:26:23.446986 1128964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:26:23.447102 1128964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:26:23.447194 1128964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:26:23.447309 1128964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:26:23.447376 1128964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:26:23.447453 1128964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:26:23.447529 1128964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:26:23.447607 1128964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:26:23.447693 1128964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:26:23.447741 1128964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:26:23.447856 1128964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:26:23.447937 1128964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:26:23.448019 1128964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:26:23.448121 1128964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:26:23.448194 1128964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:26:23.448311 1128964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:26:23.448422 1128964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:26:23.450038 1128964 out.go:204]   - Booting up control plane ...
	I0318 14:26:23.450174 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:26:23.450282 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:26:23.450371 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:26:23.450509 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:26:23.450633 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:26:23.450671 1128964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:26:23.450818 1128964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:26:23.450887 1128964 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.005932 seconds
	I0318 14:26:23.450974 1128964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:23.451093 1128964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:23.451143 1128964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:23.451340 1128964 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-075922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:23.451414 1128964 kubeadm.go:309] [bootstrap-token] Using token: k51w96.h8xduusjdfbez3gf
	I0318 14:26:23.452848 1128964 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:23.452964 1128964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:23.453073 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:23.453269 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:23.453499 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:23.453664 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:23.453785 1128964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:23.453940 1128964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:23.454005 1128964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:23.454074 1128964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:23.454084 1128964 kubeadm.go:309] 
	I0318 14:26:23.454172 1128964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:23.454186 1128964 kubeadm.go:309] 
	I0318 14:26:23.454288 1128964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:23.454298 1128964 kubeadm.go:309] 
	I0318 14:26:23.454335 1128964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:23.454412 1128964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:23.454475 1128964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:23.454484 1128964 kubeadm.go:309] 
	I0318 14:26:23.454528 1128964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:23.454538 1128964 kubeadm.go:309] 
	I0318 14:26:23.454592 1128964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:23.454599 1128964 kubeadm.go:309] 
	I0318 14:26:23.454681 1128964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:23.454804 1128964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:23.454907 1128964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:23.454919 1128964 kubeadm.go:309] 
	I0318 14:26:23.455027 1128964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:23.455146 1128964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:23.455157 1128964 kubeadm.go:309] 
	I0318 14:26:23.455264 1128964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455401 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:23.455433 1128964 kubeadm.go:309] 	--control-plane 
	I0318 14:26:23.455441 1128964 kubeadm.go:309] 
	I0318 14:26:23.455551 1128964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:23.455560 1128964 kubeadm.go:309] 
	I0318 14:26:23.455666 1128964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455814 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:23.455838 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:26:23.455849 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:23.457678 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:22.801209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:25.305096 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.459285 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:23.475803 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
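	[editor's note] The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube generates for the "kvm2" driver + "crio" runtime combination; the log does not show its contents. To inspect it on the node, a sketch using minikube's own ssh wrapper (standard commands, not taken from this log):

	    # Dump the generated bridge CNI configuration from the guest VM.
	    minikube -p default-k8s-diff-port-075922 ssh -- sudo ls /etc/cni/net.d/
	    minikube -p default-k8s-diff-port-075922 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist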
	I0318 14:26:23.515652 1128964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-075922 minikube.k8s.io/updated_at=2024_03_18T14_26_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=default-k8s-diff-port-075922 minikube.k8s.io/primary=true
	I0318 14:26:23.796828 1128964 ops.go:34] apiserver oom_adj: -16
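	[editor's note] Immediately after kubeadm init succeeds, minikube grants kube-system's default service account cluster-admin via the minikube-rbac clusterrolebinding and labels the new node as the primary control plane (the two Run lines above). A minimal sketch for verifying both by hand, reusing the binary and kubeconfig paths from the log:

	    # Confirm the RBAC binding and the node labels created above.
	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get clusterrolebinding minikube-rbac
	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get node default-k8s-diff-port-075922 --show-labels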
	I0318 14:26:23.796947 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.296970 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.797728 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.297564 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.797144 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:26.297056 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.800960 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:29.802967 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:26.798004 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.297935 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.797550 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.297031 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.797624 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.297549 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.797256 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.297964 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.797927 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:31.297742 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.300787 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:34.800941 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:31.797040 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.297155 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.797371 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.297809 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.797723 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.297045 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.797008 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.297030 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.797767 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.895914 1128964 kubeadm.go:1107] duration metric: took 12.380212538s to wait for elevateKubeSystemPrivileges
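	[editor's note] The burst of repeated `kubectl get sa default` invocations above is minikube polling for the default service account before it elevates kube-system privileges (elevateKubeSystemPrivileges). Condensed into a shell loop, the same wait looks roughly like this (the 0.5s interval is an assumption; add an external timeout in practice):

	    # Poll until the default service account exists.
	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done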
	W0318 14:26:35.895975 1128964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:35.895987 1128964 kubeadm.go:393] duration metric: took 5m15.606276512s to StartCluster
	I0318 14:26:35.896013 1128964 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.896123 1128964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:35.898023 1128964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.898324 1128964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:35.900235 1128964 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:35.898415 1128964 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:35.898550 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:35.901588 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:35.901599 1128964 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901617 1128964 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901640 1128964 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901650 1128964 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:35.901665 1128964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-075922"
	I0318 14:26:35.901588 1128964 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901698 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.901723 1128964 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901735 1128964 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:35.901764 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.902055 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902088 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902097 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902126 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902130 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902169 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.919538 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0318 14:26:35.920140 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.920836 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.920864 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.921282 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.921940 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.921983 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.923313 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
	I0318 14:26:35.923321 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0318 14:26:35.923742 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.923792 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.924263 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924280 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924381 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924395 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924710 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924733 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924893 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.925215 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.925235 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.928021 1128964 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.928047 1128964 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:35.928081 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.928422 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.928449 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.941908 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0318 14:26:35.942465 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.943114 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.943146 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.943757 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.943991 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.944493 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0318 14:26:35.944874 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.945387 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.945404 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.945865 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.945988 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.948302 1128964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:35.946821 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.947744 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0318 14:26:35.950087 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:35.950110 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:35.950135 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.950181 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.950664 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.951258 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.951295 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.951755 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.952146 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.953842 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954331 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.954353 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954360 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.954563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.956253 1128964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:35.954739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:32.294235 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:26:32.295514 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:32.295750 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:35.956487 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.957743 1128964 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:35.957764 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:35.957783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.957864 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.960451 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.960896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.960929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.961107 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.961281 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.961435 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.961565 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.968795 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0318 14:26:35.969191 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.969631 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.969646 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.969955 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.970117 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.971799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.972169 1128964 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:35.972188 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:35.972206 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.974906 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975268 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.975301 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975551 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.975767 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.975958 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.976137 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:36.122420 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:36.139655 1128964 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160857 1128964 node_ready.go:49] node "default-k8s-diff-port-075922" has status "Ready":"True"
	I0318 14:26:36.160883 1128964 node_ready.go:38] duration metric: took 21.193343ms for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160893 1128964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:36.176832 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:36.240357 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:36.240385 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:36.261620 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:36.279644 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:36.294510 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:36.294546 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:36.374231 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:36.376166 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:36.419045 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:38.032072 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.752379015s)
	I0318 14:26:38.032148 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032161 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032374 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.770714521s)
	I0318 14:26:38.032416 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032427 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032623 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032652 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032660 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032683 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032796 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032814 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032817 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032835 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032848 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.033046 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033107 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033173 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.033149 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033259 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033284 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.112866 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.112896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.113337 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.113362 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.113384 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176199 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.757085355s)
	I0318 14:26:38.176281 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176302 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176669 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176683 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176697 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176707 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176716 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176955 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176969 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176980 1128964 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-075922"
	I0318 14:26:38.178714 1128964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:26:37.300219 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:39.293136 1128583 pod_ready.go:81] duration metric: took 4m0.000606722s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
	E0318 14:26:39.293173 1128583 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:26:39.293203 1128583 pod_ready.go:38] duration metric: took 4m14.549283732s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:39.293239 1128583 kubeadm.go:591] duration metric: took 4m22.862167815s to restartPrimaryControlPlane
	W0318 14:26:39.293320 1128583 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:26:39.293362 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:26:37.296327 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:37.296642 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:38.180451 1128964 addons.go:505] duration metric: took 2.282033093s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
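	[editor's note] With the storage-provisioner, default-storageclass and metrics-server addons applied, the metrics pipeline can be spot-checked from the host. These are standard kubectl commands rather than part of the test harness (and they assume the upstream k8s-app=metrics-server label); note that in this run the metrics-server pod is still Pending, so `kubectl top` would not return data yet:

	    # Check the metrics-server pod and its APIService registration.
	    kubectl --context default-k8s-diff-port-075922 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context default-k8s-diff-port-075922 get apiservice v1beta1.metrics.k8s.io

	    # Once the pod is Ready, node metrics become queryable.
	    kubectl --context default-k8s-diff-port-075922 top nodes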
	I0318 14:26:38.194239 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:40.186091 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.186125 1128964 pod_ready.go:81] duration metric: took 4.009253844s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.186139 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193026 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.193059 1128964 pod_ready.go:81] duration metric: took 6.912513ms for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193069 1128964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199244 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.199272 1128964 pod_ready.go:81] duration metric: took 6.195834ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199283 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.204991 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.205019 1128964 pod_ready.go:81] duration metric: took 5.728459ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.205034 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214706 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.214730 1128964 pod_ready.go:81] duration metric: took 9.687528ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214739 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.581970 1128964 pod_ready.go:92] pod "kube-proxy-bzwvf" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.582045 1128964 pod_ready.go:81] duration metric: took 367.297496ms for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.582059 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981562 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.981592 1128964 pod_ready.go:81] duration metric: took 399.525488ms for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981601 1128964 pod_ready.go:38] duration metric: took 4.820697544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:40.981618 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:40.981676 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:40.998626 1128964 api_server.go:72] duration metric: took 5.100242538s to wait for apiserver process to appear ...
	I0318 14:26:40.998672 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:40.998703 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:26:41.010986 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:26:41.012714 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:41.012742 1128964 api_server.go:131] duration metric: took 14.061953ms to wait for apiserver health ...
	I0318 14:26:41.012750 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:41.186873 1128964 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:41.186910 1128964 system_pods.go:61] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.186917 1128964 system_pods.go:61] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.186922 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.186935 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.186943 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.186948 1128964 system_pods.go:61] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.186953 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.187013 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.187029 1128964 system_pods.go:61] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.187041 1128964 system_pods.go:74] duration metric: took 174.283401ms to wait for pod list to return data ...
	I0318 14:26:41.187054 1128964 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:41.381195 1128964 default_sa.go:45] found service account: "default"
	I0318 14:26:41.381238 1128964 default_sa.go:55] duration metric: took 194.17219ms for default service account to be created ...
	I0318 14:26:41.381252 1128964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:41.584896 1128964 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:41.584934 1128964 system_pods.go:89] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.584940 1128964 system_pods.go:89] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.584945 1128964 system_pods.go:89] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.584952 1128964 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.584957 1128964 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.584961 1128964 system_pods.go:89] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.584965 1128964 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.584974 1128964 system_pods.go:89] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.584980 1128964 system_pods.go:89] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.584996 1128964 system_pods.go:126] duration metric: took 203.730421ms to wait for k8s-apps to be running ...
	I0318 14:26:41.585011 1128964 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:41.585065 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:41.602211 1128964 system_svc.go:56] duration metric: took 17.185915ms WaitForService to wait for kubelet
	I0318 14:26:41.602253 1128964 kubeadm.go:576] duration metric: took 5.703881545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:41.602283 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:41.781292 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:41.781321 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:41.781333 1128964 node_conditions.go:105] duration metric: took 179.044515ms to run NodePressure ...
	I0318 14:26:41.781345 1128964 start.go:240] waiting for startup goroutines ...
	I0318 14:26:41.781352 1128964 start.go:245] waiting for cluster config update ...
	I0318 14:26:41.781363 1128964 start.go:254] writing updated cluster config ...
	I0318 14:26:41.781670 1128964 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:41.845950 1128964 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:41.848522 1128964 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-075922" cluster and "default" namespace by default
	I0318 14:26:47.296738 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:47.296974 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:07.297620 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:07.297848 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
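	[editor's note] The kubelet-check failures from process 1129259 mean kubeadm cannot reach the kubelet's healthz endpoint on port 10248, which usually indicates the kubelet never started or is crash-looping. The usual first debugging steps on the node are to re-run the same probe and inspect the kubelet unit and its journal (standard commands, not taken from this log):

	    # Re-run the probe kubeadm uses, then inspect the kubelet service.
	    curl -sSL http://localhost:10248/healthz
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -u kubelet --no-pager -n 50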
	I0318 14:27:11.668940 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.375539998s)
	I0318 14:27:11.669036 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:11.687767 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:27:11.699135 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:11.710896 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:11.710924 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:11.710971 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:11.721562 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:11.721638 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:11.733335 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:11.744643 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:11.744724 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:11.755801 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.766424 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:11.766515 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.777734 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:11.788887 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:11.788972 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
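The grep/rm sequence above removes any existing kubeconfig-style file under /etc/kubernetes that does not reference the expected control-plane endpoint. A rough shell equivalent of that loop, assuming the same endpoint and file list as in the log:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done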
	I0318 14:27:11.800792 1128583 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:11.858933 1128583 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 14:27:11.859030 1128583 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:27:12.029485 1128583 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:27:12.029703 1128583 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:27:12.029833 1128583 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 14:27:12.279174 1128583 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:27:12.281285 1128583 out.go:204]   - Generating certificates and keys ...
	I0318 14:27:12.281400 1128583 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:27:12.281507 1128583 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:27:12.281633 1128583 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:27:12.281726 1128583 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:27:12.281844 1128583 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:27:12.281938 1128583 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:27:12.282031 1128583 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:27:12.282121 1128583 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:27:12.282218 1128583 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:27:12.282325 1128583 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:27:12.282392 1128583 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:27:12.282470 1128583 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:27:12.605106 1128583 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:27:12.950706 1128583 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 14:27:13.067948 1128583 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:27:13.340677 1128583 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:27:13.393147 1128583 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:27:13.393891 1128583 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:27:13.396474 1128583 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:27:13.398563 1128583 out.go:204]   - Booting up control plane ...
	I0318 14:27:13.398698 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:27:13.398814 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:27:13.398900 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:27:13.422155 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:27:13.423529 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:27:13.423626 1128583 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:27:13.568295 1128583 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:27:19.571958 1128583 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003509 seconds
	I0318 14:27:19.587644 1128583 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:27:19.607417 1128583 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:27:20.153253 1128583 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:27:20.153526 1128583 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-188109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:27:20.671613 1128583 kubeadm.go:309] [bootstrap-token] Using token: oq5d1l.24j9td8ex727h998
	I0318 14:27:20.673250 1128583 out.go:204]   - Configuring RBAC rules ...
	I0318 14:27:20.673402 1128583 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:27:20.680765 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:27:20.693884 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:27:20.698696 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:27:20.702572 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:27:20.710027 1128583 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:27:20.725068 1128583 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:27:20.981178 1128583 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:27:21.104335 1128583 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:27:21.107428 1128583 kubeadm.go:309] 
	I0318 14:27:21.107550 1128583 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:27:21.107596 1128583 kubeadm.go:309] 
	I0318 14:27:21.107725 1128583 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:27:21.107750 1128583 kubeadm.go:309] 
	I0318 14:27:21.107796 1128583 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:27:21.107894 1128583 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:27:21.107995 1128583 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:27:21.108030 1128583 kubeadm.go:309] 
	I0318 14:27:21.108127 1128583 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:27:21.108145 1128583 kubeadm.go:309] 
	I0318 14:27:21.108228 1128583 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:27:21.108242 1128583 kubeadm.go:309] 
	I0318 14:27:21.108318 1128583 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:27:21.108400 1128583 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:27:21.108487 1128583 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:27:21.108503 1128583 kubeadm.go:309] 
	I0318 14:27:21.108628 1128583 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:27:21.108730 1128583 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:27:21.108741 1128583 kubeadm.go:309] 
	I0318 14:27:21.108839 1128583 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.108968 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:27:21.109031 1128583 kubeadm.go:309] 	--control-plane 
	I0318 14:27:21.109054 1128583 kubeadm.go:309] 
	I0318 14:27:21.109176 1128583 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:27:21.109195 1128583 kubeadm.go:309] 
	I0318 14:27:21.109298 1128583 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.109455 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:27:21.114992 1128583 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
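The init run succeeded; the only warning is that the kubelet systemd unit is not enabled at boot. On a live node that warning, and the bootstrap token printed above, could be handled roughly as follows (illustrative commands, not taken from this run):

    sudo systemctl enable kubelet.service        # addresses the [WARNING Service-Kubelet] message
    kubeadm token list                           # confirm the bootstrap token is present
    kubeadm token create --print-join-command    # print a fresh 'kubeadm join ...' line if needed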
	I0318 14:27:21.115128 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:27:21.115151 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:27:21.116940 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:27:21.118320 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:27:21.167945 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
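The 457-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. Purely as an illustration of what a bridge CNI config of this kind typically contains (field values below are assumptions, not the actual file contents):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }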
	I0318 14:27:21.256429 1128583 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-188109 minikube.k8s.io/updated_at=2024_03_18T14_27_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=no-preload-188109 minikube.k8s.io/primary=true
	I0318 14:27:21.315419 1128583 ops.go:34] apiserver oom_adj: -16
	I0318 14:27:21.530472 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.030814 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.531214 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.030869 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.530677 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.031137 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.531400 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.031455 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.530648 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.031501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.531399 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.031109 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.531261 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.030757 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.531295 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.030505 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.531501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.030996 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.530490 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.030520 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.531340 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.031217 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.531425 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.031231 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.531300 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.678904 1128583 kubeadm.go:1107] duration metric: took 12.422463336s to wait for elevateKubeSystemPrivileges
	W0318 14:27:33.678959 1128583 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:27:33.678972 1128583 kubeadm.go:393] duration metric: took 5m17.305262011s to StartCluster
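The repeated 'kubectl get sa default' calls above are a roughly 500ms poll until the default service account exists, which is what the 12.4s elevateKubeSystemPrivileges metric measures. A hand-rolled equivalent of that wait, using the binary path shown in the log:

    KUBECTL=/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done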
	I0318 14:27:33.678999 1128583 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.679119 1128583 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:27:33.681595 1128583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.681893 1128583 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:27:33.683724 1128583 out.go:177] * Verifying Kubernetes components...
	I0318 14:27:33.682059 1128583 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:27:33.682122 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:27:33.685123 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:27:33.685131 1128583 addons.go:69] Setting default-storageclass=true in profile "no-preload-188109"
	I0318 14:27:33.685135 1128583 addons.go:69] Setting storage-provisioner=true in profile "no-preload-188109"
	I0318 14:27:33.685139 1128583 addons.go:69] Setting metrics-server=true in profile "no-preload-188109"
	I0318 14:27:33.685165 1128583 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-188109"
	I0318 14:27:33.685173 1128583 addons.go:234] Setting addon metrics-server=true in "no-preload-188109"
	I0318 14:27:33.685175 1128583 addons.go:234] Setting addon storage-provisioner=true in "no-preload-188109"
	W0318 14:27:33.685182 1128583 addons.go:243] addon metrics-server should already be in state true
	W0318 14:27:33.685185 1128583 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:27:33.685231 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685238 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685573 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685575 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685613 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685617 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685629 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685637 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.703022 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0318 14:27:33.703262 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0318 14:27:33.703844 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704181 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704628 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704649 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.704715 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704736 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.705213 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705374 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705809 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705863 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.705911 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705987 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.706076 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0318 14:27:33.706558 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.707198 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.707222 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.707699 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.708354 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.712289 1128583 addons.go:234] Setting addon default-storageclass=true in "no-preload-188109"
	W0318 14:27:33.712323 1128583 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:27:33.712364 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.712795 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.712833 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.724381 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0318 14:27:33.724980 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.725587 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.725614 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.726054 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.726363 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.727777 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0318 14:27:33.728182 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.728497 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.730538 1128583 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:27:33.729152 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.730851 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0318 14:27:33.732037 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:27:33.732055 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:27:33.732076 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.732113 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.732489 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.732593 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.732881 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.732979 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.732991 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.733604 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.734297 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.734329 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.735399 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.737266 1128583 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:27:33.735988 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.736830 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.739081 1128583 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:33.739098 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:27:33.737327 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.739122 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.739142 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.740009 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.740263 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.740482 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.742702 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743181 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.743211 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743473 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.743706 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.743902 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.744097 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.752903 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0318 14:27:33.756275 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.756901 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.756932 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.757363 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.757603 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.759471 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.759732 1128583 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:33.759751 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:27:33.759772 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.762687 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763139 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.763162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763414 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.763599 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.763765 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.763919 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.942490 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:27:33.975796 1128583 node_ready.go:35] waiting up to 6m0s for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008100 1128583 node_ready.go:49] node "no-preload-188109" has status "Ready":"True"
	I0318 14:27:34.008135 1128583 node_ready.go:38] duration metric: took 32.281068ms for node "no-preload-188109" to be "Ready" ...
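node_ready checks the node's Ready condition through the API. From outside the VM the same check can be expressed directly with kubectl (illustrative, assuming the kubeconfig context written earlier in the log):

    kubectl --context no-preload-188109 get node no-preload-188109
    kubectl --context no-preload-188109 wait --for=condition=Ready node/no-preload-188109 --timeout=6m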
	I0318 14:27:34.008149 1128583 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:34.039370 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:34.067765 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:27:34.067798 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:27:34.088294 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:34.091931 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:34.121689 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:27:34.121722 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:27:34.183609 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:34.183638 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:27:34.264906 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
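The metrics-server addon objects applied above can be checked in the usual way; note that in this job the addon's image points at fake.domain (see the 'Using image' line earlier), so the pod is expected to stay unready. Illustrative checks:

    kubectl -n kube-system get deploy,svc metrics-server
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl top nodes   # only returns data once metrics-server is actually serving metrics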
	I0318 14:27:35.590900 1128583 pod_ready.go:92] pod "coredns-76f75df574-jk9v5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.590928 1128583 pod_ready.go:81] duration metric: took 1.551526097s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.590938 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605647 1128583 pod_ready.go:92] pod "coredns-76f75df574-xczpc" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.605675 1128583 pod_ready.go:81] duration metric: took 14.730232ms for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605685 1128583 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.613213 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.521243904s)
	I0318 14:27:35.613276 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613289 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613282 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.524948587s)
	I0318 14:27:35.613324 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.613811 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613813 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.613824 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613831 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614119 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614166 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614183 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.614191 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614192 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.614234 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614273 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614502 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614517 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.636576 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.636610 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.636920 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.636946 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.656945 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.656972 1128583 pod_ready.go:81] duration metric: took 51.280554ms for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.656983 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683260 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.683291 1128583 pod_ready.go:81] duration metric: took 26.301625ms for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683301 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.691855 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42688194s)
	I0318 14:27:35.691918 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.691934 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692300 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692325 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692336 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.692344 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692661 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.692701 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692709 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692721 1128583 addons.go:470] Verifying addon metrics-server=true in "no-preload-188109"
	I0318 14:27:35.694758 1128583 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:27:35.696004 1128583 addons.go:505] duration metric: took 2.013954954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:27:35.709010 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.709035 1128583 pod_ready.go:81] duration metric: took 25.726967ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.709045 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982032 1128583 pod_ready.go:92] pod "kube-proxy-qpxx5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.982080 1128583 pod_ready.go:81] duration metric: took 273.026354ms for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982094 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380184 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:36.380228 1128583 pod_ready.go:81] duration metric: took 398.123566ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380241 1128583 pod_ready.go:38] duration metric: took 2.372078145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:36.380264 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:27:36.380334 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:27:36.401316 1128583 api_server.go:72] duration metric: took 2.719374991s to wait for apiserver process to appear ...
	I0318 14:27:36.401358 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:27:36.401389 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:27:36.407212 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:27:36.408930 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:27:36.408966 1128583 api_server.go:131] duration metric: took 7.597771ms to wait for apiserver health ...
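The healthz probe above hits the apiserver over HTTPS at the node IP; it can be reproduced by hand with curl (illustrative; -k skips CA verification):

    curl -sk https://192.168.61.40:8443/healthz            # expect: ok
    curl -sk "https://192.168.61.40:8443/healthz?verbose"  # per-check breakdown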
	I0318 14:27:36.408989 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:27:36.583053 1128583 system_pods.go:59] 9 kube-system pods found
	I0318 14:27:36.583099 1128583 system_pods.go:61] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.583107 1128583 system_pods.go:61] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.583112 1128583 system_pods.go:61] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.583116 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.583120 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.583123 1128583 system_pods.go:61] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.583127 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.583134 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.583138 1128583 system_pods.go:61] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.583147 1128583 system_pods.go:74] duration metric: took 174.139423ms to wait for pod list to return data ...
	I0318 14:27:36.583156 1128583 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:27:36.779733 1128583 default_sa.go:45] found service account: "default"
	I0318 14:27:36.779771 1128583 default_sa.go:55] duration metric: took 196.607194ms for default service account to be created ...
	I0318 14:27:36.779783 1128583 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:27:36.982750 1128583 system_pods.go:86] 9 kube-system pods found
	I0318 14:27:36.982783 1128583 system_pods.go:89] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.982789 1128583 system_pods.go:89] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.982793 1128583 system_pods.go:89] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.982798 1128583 system_pods.go:89] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.982804 1128583 system_pods.go:89] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.982808 1128583 system_pods.go:89] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.982812 1128583 system_pods.go:89] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.982819 1128583 system_pods.go:89] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.982823 1128583 system_pods.go:89] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.982832 1128583 system_pods.go:126] duration metric: took 203.042771ms to wait for k8s-apps to be running ...
	I0318 14:27:36.982839 1128583 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:27:36.982902 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:37.000948 1128583 system_svc.go:56] duration metric: took 18.09435ms WaitForService to wait for kubelet
	I0318 14:27:37.000980 1128583 kubeadm.go:576] duration metric: took 3.319049387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:27:37.001005 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:27:37.180608 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:27:37.180639 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:27:37.180652 1128583 node_conditions.go:105] duration metric: took 179.641912ms to run NodePressure ...
	I0318 14:27:37.180665 1128583 start.go:240] waiting for startup goroutines ...
	I0318 14:27:37.180672 1128583 start.go:245] waiting for cluster config update ...
	I0318 14:27:37.180686 1128583 start.go:254] writing updated cluster config ...
	I0318 14:27:37.181004 1128583 ssh_runner.go:195] Run: rm -f paused
	I0318 14:27:37.236286 1128583 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 14:27:37.238455 1128583 out.go:177] * Done! kubectl is now configured to use "no-preload-188109" cluster and "default" namespace by default
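At this point the kubeconfig context named after the profile is the active one. Confirming or re-selecting it manually (illustrative):

    kubectl config current-context                # should print no-preload-188109
    kubectl config use-context no-preload-188109  # switch back if another profile changed it
    kubectl get pods -A                           # quick sanity check against the new cluster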
	I0318 14:27:47.299396 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:47.299722 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:47.299759 1129259 kubeadm.go:309] 
	I0318 14:27:47.299848 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:27:47.300040 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:27:47.300062 1129259 kubeadm.go:309] 
	I0318 14:27:47.300106 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:27:47.300187 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:27:47.300340 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:27:47.300356 1129259 kubeadm.go:309] 
	I0318 14:27:47.300534 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:27:47.300590 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:27:47.300636 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:27:47.300646 1129259 kubeadm.go:309] 
	I0318 14:27:47.300803 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:27:47.300929 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0318 14:27:47.300942 1129259 kubeadm.go:309] 
	I0318 14:27:47.301093 1129259 kubeadm.go:309] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:27:47.301232 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:27:47.301346 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:27:47.301475 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:27:47.301496 1129259 kubeadm.go:309] 
	I0318 14:27:47.303477 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:47.303616 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:27:47.303718 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 14:27:47.303903 1129259 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
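The failure above (the v1.20.0 old-k8s-version profile) comes down to the kubelet never answering its healthz probe, so wait-control-plane times out; minikube resets and retries in the lines that follow. The diagnostics kubeadm recommends amount to the commands below (illustrative; CONTAINERID is a placeholder):

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID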
	
	I0318 14:27:47.303969 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:27:47.790664 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:47.807959 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:47.820332 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:47.820357 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:47.820422 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:47.832124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:47.832219 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:47.845017 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:47.856877 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:47.856954 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:47.868530 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.879309 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:47.879394 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.891766 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:47.903303 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:47.903392 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:27:47.914820 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:48.170124 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:29:44.224147 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:29:44.224414 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 14:29:44.225789 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:29:44.225885 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:29:44.226010 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:29:44.226135 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:29:44.226292 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:29:44.226384 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:29:44.228246 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:29:44.228346 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:29:44.228440 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:29:44.228567 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:29:44.228684 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:29:44.228803 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:29:44.228874 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:29:44.229018 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:29:44.229096 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:29:44.229166 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:29:44.229231 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:29:44.229269 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:29:44.229316 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:29:44.229365 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:29:44.229415 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:29:44.229468 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:29:44.229540 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:29:44.229663 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:29:44.229755 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:29:44.229804 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:29:44.229893 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:29:44.231359 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:29:44.231484 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:29:44.231592 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:29:44.231674 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:29:44.231777 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:29:44.231993 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:29:44.232046 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:29:44.232103 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232333 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232411 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232621 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232691 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232896 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232955 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233113 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233178 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233370 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233382 1129259 kubeadm.go:309] 
	I0318 14:29:44.233430 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:29:44.233480 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:29:44.233492 1129259 kubeadm.go:309] 
	I0318 14:29:44.233523 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:29:44.233554 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:29:44.233642 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:29:44.233655 1129259 kubeadm.go:309] 
	I0318 14:29:44.233797 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:29:44.233830 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:29:44.233860 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:29:44.233867 1129259 kubeadm.go:309] 
	I0318 14:29:44.233994 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:29:44.234116 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:29:44.234124 1129259 kubeadm.go:309] 
	I0318 14:29:44.234246 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:29:44.234389 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:29:44.234516 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:29:44.234606 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:29:44.234676 1129259 kubeadm.go:309] 
	I0318 14:29:44.234699 1129259 kubeadm.go:393] duration metric: took 7m59.028536241s to StartCluster
	I0318 14:29:44.234794 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:29:44.234989 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:29:44.301714 1129259 cri.go:89] found id: ""
	I0318 14:29:44.301764 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.301792 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:29:44.301801 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:29:44.301865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:29:44.345158 1129259 cri.go:89] found id: ""
	I0318 14:29:44.345197 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.345209 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:29:44.345217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:29:44.345281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:29:44.381184 1129259 cri.go:89] found id: ""
	I0318 14:29:44.381217 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.381227 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:29:44.381232 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:29:44.381296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:29:44.419906 1129259 cri.go:89] found id: ""
	I0318 14:29:44.419972 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.419987 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:29:44.419996 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:29:44.420085 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:29:44.459683 1129259 cri.go:89] found id: ""
	I0318 14:29:44.459732 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.459747 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:29:44.459755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:29:44.459848 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:29:44.502434 1129259 cri.go:89] found id: ""
	I0318 14:29:44.502477 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.502490 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:29:44.502499 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:29:44.502563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:29:44.543384 1129259 cri.go:89] found id: ""
	I0318 14:29:44.543417 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.543429 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:29:44.543438 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:29:44.543509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:29:44.584405 1129259 cri.go:89] found id: ""
	I0318 14:29:44.584450 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.584463 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:29:44.584478 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:29:44.584496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:29:44.638997 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:29:44.639036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:29:44.656641 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:29:44.656679 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:29:44.757942 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:29:44.757976 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:29:44.757994 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:29:44.878791 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:29:44.878838 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
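(The diagnostics minikube gathers in the lines above can be reproduced by hand on the node; a minimal sketch, not part of the test output, using only the commands already shown in the log and writing each result to a local file for attachment to an issue.)

	# Sketch only: collect the kubelet, dmesg, CRI-O and container-status
	# diagnostics gathered above into local files.
	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	sudo journalctl -u crio -n 400 > crio.log
	sudo crictl ps -a > containers.log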
	W0318 14:29:44.926371 1129259 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 14:29:44.926432 1129259 out.go:239] * 
	W0318 14:29:44.926513 1129259 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.926548 1129259 out.go:239] * 
	W0318 14:29:44.927402 1129259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:29:44.931815 1129259 out.go:177] 
	W0318 14:29:44.933471 1129259 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.933562 1129259 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 14:29:44.933609 1129259 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 14:29:44.935544 1129259 out.go:177] 
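(The remediation suggested above could be tried on a retry roughly as follows; this is an illustrative sketch, not part of the report. Only the --extra-config flag and the journalctl check come from the suggestion in the log; the profile placeholder and the driver/runtime flags are assumptions matching this job's KVM/cri-o configuration.)

	# Sketch only: retry the start with the kubelet cgroup driver pinned to
	# systemd, per the K8S_KUBELET_NOT_RUNNING suggestion above.
	minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If it still fails, check kubelet health on the node:
	minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 100"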
	
	
	==> CRI-O <==
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.322335217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95ad5280-d056-48cd-a26c-5e2c59613734 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.323847850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=129391e5-1f69-410c-84d0-475b2f8770e2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.324341904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772599324308957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=129391e5-1f69-410c-84d0-475b2f8770e2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.325286927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef7f21a6-a9a1-4c60-96ed-2a8a43d6f171 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.325361051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef7f21a6-a9a1-4c60-96ed-2a8a43d6f171 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.325608370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed6e4fe42941b75e00ee31ae83067d7cce9e35e61832045df43b19ce9e57215b,PodSandboxId:9c48a3a8499e2613f106caedd0a693fbd9d6ccae8a979fdb108fef4ab85b7bf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710772056081244408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b,},Annotations:map[string]string{io.kubernetes.container.hash: 75097ff0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1116147cc6e3c0720232a34abd47ae4c0e50beea7079e046863267eb5a15b59,PodSandboxId:bb476a5de03790a8866b910f6e38572a79ae0f678465a4ed2928799c87868f07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054893062850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-jk9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ff991e-2c6b-49ad-bc69-c427d1f24610,},Annotations:map[string]string{io.kubernetes.container.hash: 41c999b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a4759ba2307687c5ab971f85f7a339165bb85e2012da3fc57cd29aa1f0935d,PodSandboxId:9091510b0061b52e7e4c020f864099432ac7be324da92171eccf94f67235f48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054839418719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xczpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0
9adcb8-dacb-4b1c-bbbf-9f056e89da3b,},Annotations:map[string]string{io.kubernetes.container.hash: 721a4bb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdf1dc2f84584bb32188b32b39e9b1dadc3d7c0224ddcbcbd8bb13189be0bc6,PodSandboxId:1f70fb8e41ab06109051792e89055742a6e88847d98dfc37681d373a9b92d7e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710772054617189991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a139949c-570d-438a-955a-03768aabf027,},Annotations:map[string]string{io.kubernetes.container.hash: e8ecf6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f877d2b9b5d6969de266d253c6bb56f65a5e7309be6635d093ab1a2b18b7ae2,PodSandboxId:2c27f2d1cf2bf9889b4ec75afd46940145e430341dde2d71098ed0adde4ee8df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710772034942835620,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d27e2c5c45d3f09ac70ca190354fe58,},Annotations:map[string]string{io.kubernetes.container.hash: f570e013,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc83d2a0284ea679e26d984aaf8bef522a10e4d8274c4246f0326bcc74476625,PodSandboxId:1a0cf825c74185e2f488d73af60ddb499b2bac88dcdcc3eb6592c0d56b62718f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710772034937420583,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e9cc0c1e86cd72954ecefe8bf52f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281a4e15f233893d5c7e8f484155034e7cd3a218baf225e18b560cd195552645,PodSandboxId:0f618e16932baa4a1ac4dabe4bbe847f579b685de0371d37da1ad1afb48070e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710772034824537926,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71403531e1e1e87ee7c418a4eff2891a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad416fceca513ecfdac4be6680578fe9184fcaa64065021395d8fecb4878cab9,PodSandboxId:14ebf104ef7aca6d8cb4b3b69fd7fa78da1e18e5a9864546d8108acb3599261f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710772034792609603,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e6ae07d8f2d405043ef052c391a762,},Annotations:map[string]string{io.kubernetes.container.hash: adf3e194,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef7f21a6-a9a1-4c60-96ed-2a8a43d6f171 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.373556797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67851abb-69f2-4d88-86f4-075e8597ac9c name=/runtime.v1.RuntimeService/Version
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.373627773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67851abb-69f2-4d88-86f4-075e8597ac9c name=/runtime.v1.RuntimeService/Version
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.375509630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b704cbb-274a-45f1-af8a-2ab6a32fa17c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.375984712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772599375960000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b704cbb-274a-45f1-af8a-2ab6a32fa17c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.376661355Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=307d4042-3d39-46b1-981a-22a648fa6493 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.376807593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=307d4042-3d39-46b1-981a-22a648fa6493 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.377026758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed6e4fe42941b75e00ee31ae83067d7cce9e35e61832045df43b19ce9e57215b,PodSandboxId:9c48a3a8499e2613f106caedd0a693fbd9d6ccae8a979fdb108fef4ab85b7bf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710772056081244408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b,},Annotations:map[string]string{io.kubernetes.container.hash: 75097ff0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1116147cc6e3c0720232a34abd47ae4c0e50beea7079e046863267eb5a15b59,PodSandboxId:bb476a5de03790a8866b910f6e38572a79ae0f678465a4ed2928799c87868f07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054893062850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-jk9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ff991e-2c6b-49ad-bc69-c427d1f24610,},Annotations:map[string]string{io.kubernetes.container.hash: 41c999b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a4759ba2307687c5ab971f85f7a339165bb85e2012da3fc57cd29aa1f0935d,PodSandboxId:9091510b0061b52e7e4c020f864099432ac7be324da92171eccf94f67235f48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054839418719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xczpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0
9adcb8-dacb-4b1c-bbbf-9f056e89da3b,},Annotations:map[string]string{io.kubernetes.container.hash: 721a4bb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdf1dc2f84584bb32188b32b39e9b1dadc3d7c0224ddcbcbd8bb13189be0bc6,PodSandboxId:1f70fb8e41ab06109051792e89055742a6e88847d98dfc37681d373a9b92d7e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710772054617189991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a139949c-570d-438a-955a-03768aabf027,},Annotations:map[string]string{io.kubernetes.container.hash: e8ecf6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f877d2b9b5d6969de266d253c6bb56f65a5e7309be6635d093ab1a2b18b7ae2,PodSandboxId:2c27f2d1cf2bf9889b4ec75afd46940145e430341dde2d71098ed0adde4ee8df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710772034942835620,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d27e2c5c45d3f09ac70ca190354fe58,},Annotations:map[string]string{io.kubernetes.container.hash: f570e013,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc83d2a0284ea679e26d984aaf8bef522a10e4d8274c4246f0326bcc74476625,PodSandboxId:1a0cf825c74185e2f488d73af60ddb499b2bac88dcdcc3eb6592c0d56b62718f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710772034937420583,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e9cc0c1e86cd72954ecefe8bf52f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281a4e15f233893d5c7e8f484155034e7cd3a218baf225e18b560cd195552645,PodSandboxId:0f618e16932baa4a1ac4dabe4bbe847f579b685de0371d37da1ad1afb48070e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710772034824537926,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71403531e1e1e87ee7c418a4eff2891a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad416fceca513ecfdac4be6680578fe9184fcaa64065021395d8fecb4878cab9,PodSandboxId:14ebf104ef7aca6d8cb4b3b69fd7fa78da1e18e5a9864546d8108acb3599261f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710772034792609603,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e6ae07d8f2d405043ef052c391a762,},Annotations:map[string]string{io.kubernetes.container.hash: adf3e194,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=307d4042-3d39-46b1-981a-22a648fa6493 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.423621845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5dd74aa1-e5c1-4205-84f6-9a3b6e6d04c4 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.424253414Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5dd74aa1-e5c1-4205-84f6-9a3b6e6d04c4 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.425529395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94874024-1f7c-4ca5-aa50-0c82b422995a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.426887685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772599426781648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94874024-1f7c-4ca5-aa50-0c82b422995a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.428421375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e9e2448-9682-4292-8383-8efa172a4b29 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.428546689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e9e2448-9682-4292-8383-8efa172a4b29 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.429152229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed6e4fe42941b75e00ee31ae83067d7cce9e35e61832045df43b19ce9e57215b,PodSandboxId:9c48a3a8499e2613f106caedd0a693fbd9d6ccae8a979fdb108fef4ab85b7bf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710772056081244408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b,},Annotations:map[string]string{io.kubernetes.container.hash: 75097ff0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1116147cc6e3c0720232a34abd47ae4c0e50beea7079e046863267eb5a15b59,PodSandboxId:bb476a5de03790a8866b910f6e38572a79ae0f678465a4ed2928799c87868f07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054893062850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-jk9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ff991e-2c6b-49ad-bc69-c427d1f24610,},Annotations:map[string]string{io.kubernetes.container.hash: 41c999b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a4759ba2307687c5ab971f85f7a339165bb85e2012da3fc57cd29aa1f0935d,PodSandboxId:9091510b0061b52e7e4c020f864099432ac7be324da92171eccf94f67235f48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054839418719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xczpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0
9adcb8-dacb-4b1c-bbbf-9f056e89da3b,},Annotations:map[string]string{io.kubernetes.container.hash: 721a4bb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdf1dc2f84584bb32188b32b39e9b1dadc3d7c0224ddcbcbd8bb13189be0bc6,PodSandboxId:1f70fb8e41ab06109051792e89055742a6e88847d98dfc37681d373a9b92d7e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710772054617189991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a139949c-570d-438a-955a-03768aabf027,},Annotations:map[string]string{io.kubernetes.container.hash: e8ecf6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f877d2b9b5d6969de266d253c6bb56f65a5e7309be6635d093ab1a2b18b7ae2,PodSandboxId:2c27f2d1cf2bf9889b4ec75afd46940145e430341dde2d71098ed0adde4ee8df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710772034942835620,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d27e2c5c45d3f09ac70ca190354fe58,},Annotations:map[string]string{io.kubernetes.container.hash: f570e013,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc83d2a0284ea679e26d984aaf8bef522a10e4d8274c4246f0326bcc74476625,PodSandboxId:1a0cf825c74185e2f488d73af60ddb499b2bac88dcdcc3eb6592c0d56b62718f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710772034937420583,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e9cc0c1e86cd72954ecefe8bf52f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281a4e15f233893d5c7e8f484155034e7cd3a218baf225e18b560cd195552645,PodSandboxId:0f618e16932baa4a1ac4dabe4bbe847f579b685de0371d37da1ad1afb48070e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710772034824537926,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71403531e1e1e87ee7c418a4eff2891a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad416fceca513ecfdac4be6680578fe9184fcaa64065021395d8fecb4878cab9,PodSandboxId:14ebf104ef7aca6d8cb4b3b69fd7fa78da1e18e5a9864546d8108acb3599261f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710772034792609603,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e6ae07d8f2d405043ef052c391a762,},Annotations:map[string]string{io.kubernetes.container.hash: adf3e194,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e9e2448-9682-4292-8383-8efa172a4b29 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.458847307Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=026920ec-b78d-4d73-a2b2-9ec27a6dde61 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.461098342Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9c48a3a8499e2613f106caedd0a693fbd9d6ccae8a979fdb108fef4ab85b7bf5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710772055927221143,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T14:27:35.616001564Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0cd1d7296b3e9c169fa435007483a654752a6674aeccc31e3ffcb3fc7591f838,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-9hjss,Uid:87eb7974-1ffa-40d4-bb06-4963e92e1c7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710772055831058278,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-9hjss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87eb7974-1ffa-40d4-bb06-4963e92e1c7f
,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T14:27:35.521980818Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f70fb8e41ab06109051792e89055742a6e88847d98dfc37681d373a9b92d7e5,Metadata:&PodSandboxMetadata{Name:kube-proxy-qpxx5,Uid:a139949c-570d-438a-955a-03768aabf027,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710772054235268302,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qpxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a139949c-570d-438a-955a-03768aabf027,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T14:27:33.627334362Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bb476a5de03790a8866b910f6e38572a79ae0f678465a4ed2928799c87868f07,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-jk9v5,Uid
:15ff991e-2c6b-49ad-bc69-c427d1f24610,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710772054220049797,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-jk9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ff991e-2c6b-49ad-bc69-c427d1f24610,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T14:27:33.908646874Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9091510b0061b52e7e4c020f864099432ac7be324da92171eccf94f67235f48c,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-xczpc,Uid:f09adcb8-dacb-4b1c-bbbf-9f056e89da3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710772054160891961,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-xczpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f09adcb8-dacb-4b1c-bbbf-9f056e89da3b,k8s-app: kube-dns,pod-templat
e-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T14:27:33.847015960Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1a0cf825c74185e2f488d73af60ddb499b2bac88dcdcc3eb6592c0d56b62718f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-188109,Uid:ee4e9cc0c1e86cd72954ecefe8bf52f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710772034598868426,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e9cc0c1e86cd72954ecefe8bf52f4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ee4e9cc0c1e86cd72954ecefe8bf52f4,kubernetes.io/config.seen: 2024-03-18T14:27:14.146950349Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c27f2d1cf2bf9889b4ec75afd46940145e430341dde2d71098ed0adde4ee8df,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-1
88109,Uid:9d27e2c5c45d3f09ac70ca190354fe58,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710772034591271894,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d27e2c5c45d3f09ac70ca190354fe58,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.40:2379,kubernetes.io/config.hash: 9d27e2c5c45d3f09ac70ca190354fe58,kubernetes.io/config.seen: 2024-03-18T14:27:14.146941316Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0f618e16932baa4a1ac4dabe4bbe847f579b685de0371d37da1ad1afb48070e1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-188109,Uid:71403531e1e1e87ee7c418a4eff2891a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710772034589379536,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: kube-controller-manager-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71403531e1e1e87ee7c418a4eff2891a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 71403531e1e1e87ee7c418a4eff2891a,kubernetes.io/config.seen: 2024-03-18T14:27:14.146948701Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:14ebf104ef7aca6d8cb4b3b69fd7fa78da1e18e5a9864546d8108acb3599261f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-188109,Uid:94e6ae07d8f2d405043ef052c391a762,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710772034587162933,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e6ae07d8f2d405043ef052c391a762,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.40:8443,
kubernetes.io/config.hash: 94e6ae07d8f2d405043ef052c391a762,kubernetes.io/config.seen: 2024-03-18T14:27:14.146947117Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=026920ec-b78d-4d73-a2b2-9ec27a6dde61 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.464456101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbd66b48-90d5-44f8-830f-5bdd80448b6e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.464536253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbd66b48-90d5-44f8-830f-5bdd80448b6e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:36:39 no-preload-188109 crio[695]: time="2024-03-18 14:36:39.465000511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed6e4fe42941b75e00ee31ae83067d7cce9e35e61832045df43b19ce9e57215b,PodSandboxId:9c48a3a8499e2613f106caedd0a693fbd9d6ccae8a979fdb108fef4ab85b7bf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710772056081244408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b,},Annotations:map[string]string{io.kubernetes.container.hash: 75097ff0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1116147cc6e3c0720232a34abd47ae4c0e50beea7079e046863267eb5a15b59,PodSandboxId:bb476a5de03790a8866b910f6e38572a79ae0f678465a4ed2928799c87868f07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054893062850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-jk9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ff991e-2c6b-49ad-bc69-c427d1f24610,},Annotations:map[string]string{io.kubernetes.container.hash: 41c999b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a4759ba2307687c5ab971f85f7a339165bb85e2012da3fc57cd29aa1f0935d,PodSandboxId:9091510b0061b52e7e4c020f864099432ac7be324da92171eccf94f67235f48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054839418719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xczpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0
9adcb8-dacb-4b1c-bbbf-9f056e89da3b,},Annotations:map[string]string{io.kubernetes.container.hash: 721a4bb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdf1dc2f84584bb32188b32b39e9b1dadc3d7c0224ddcbcbd8bb13189be0bc6,PodSandboxId:1f70fb8e41ab06109051792e89055742a6e88847d98dfc37681d373a9b92d7e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710772054617189991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a139949c-570d-438a-955a-03768aabf027,},Annotations:map[string]string{io.kubernetes.container.hash: e8ecf6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f877d2b9b5d6969de266d253c6bb56f65a5e7309be6635d093ab1a2b18b7ae2,PodSandboxId:2c27f2d1cf2bf9889b4ec75afd46940145e430341dde2d71098ed0adde4ee8df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710772034942835620,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d27e2c5c45d3f09ac70ca190354fe58,},Annotations:map[string]string{io.kubernetes.container.hash: f570e013,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc83d2a0284ea679e26d984aaf8bef522a10e4d8274c4246f0326bcc74476625,PodSandboxId:1a0cf825c74185e2f488d73af60ddb499b2bac88dcdcc3eb6592c0d56b62718f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710772034937420583,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e9cc0c1e86cd72954ecefe8bf52f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281a4e15f233893d5c7e8f484155034e7cd3a218baf225e18b560cd195552645,PodSandboxId:0f618e16932baa4a1ac4dabe4bbe847f579b685de0371d37da1ad1afb48070e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710772034824537926,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71403531e1e1e87ee7c418a4eff2891a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad416fceca513ecfdac4be6680578fe9184fcaa64065021395d8fecb4878cab9,PodSandboxId:14ebf104ef7aca6d8cb4b3b69fd7fa78da1e18e5a9864546d8108acb3599261f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710772034792609603,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e6ae07d8f2d405043ef052c391a762,},Annotations:map[string]string{io.kubernetes.container.hash: adf3e194,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbd66b48-90d5-44f8-830f-5bdd80448b6e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ed6e4fe42941b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   9c48a3a8499e2       storage-provisioner
	f1116147cc6e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   bb476a5de0379       coredns-76f75df574-jk9v5
	15a4759ba2307       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9091510b0061b       coredns-76f75df574-xczpc
	7cdf1dc2f8458       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   9 minutes ago       Running             kube-proxy                0                   1f70fb8e41ab0       kube-proxy-qpxx5
	1f877d2b9b5d6       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   2c27f2d1cf2bf       etcd-no-preload-188109
	cc83d2a0284ea       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 minutes ago       Running             kube-scheduler            2                   1a0cf825c7418       kube-scheduler-no-preload-188109
	281a4e15f2338       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 minutes ago       Running             kube-controller-manager   2                   0f618e16932ba       kube-controller-manager-no-preload-188109
	ad416fceca513       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 minutes ago       Running             kube-apiserver            2                   14ebf104ef7ac       kube-apiserver-no-preload-188109
	
	
	==> coredns [15a4759ba2307687c5ab971f85f7a339165bb85e2012da3fc57cd29aa1f0935d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f1116147cc6e3c0720232a34abd47ae4c0e50beea7079e046863267eb5a15b59] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-188109
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-188109
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=no-preload-188109
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T14_27_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 14:27:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-188109
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:36:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:32:46 +0000   Mon, 18 Mar 2024 14:27:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:32:46 +0000   Mon, 18 Mar 2024 14:27:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:32:46 +0000   Mon, 18 Mar 2024 14:27:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:32:46 +0000   Mon, 18 Mar 2024 14:27:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.40
	  Hostname:    no-preload-188109
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a1855d668dc4b229047ec8f42cf9f17
	  System UUID:                8a1855d6-68dc-4b22-9047-ec8f42cf9f17
	  Boot ID:                    d5473383-b39b-4bbe-b8c8-9a0dbd930d0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-jk9v5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-76f75df574-xczpc                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-no-preload-188109                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-no-preload-188109             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-no-preload-188109    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-qpxx5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-no-preload-188109             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-9hjss              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node no-preload-188109 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node no-preload-188109 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node no-preload-188109 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node no-preload-188109 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node no-preload-188109 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node no-preload-188109 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m7s                   node-controller  Node no-preload-188109 event: Registered Node no-preload-188109 in Controller
	
	
	==> dmesg <==
	[  +0.055065] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043601] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.999535] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.918347] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.429258] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.449600] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.059325] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077549] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.204870] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.167903] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.300661] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[Mar18 14:22] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.068417] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.262601] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +5.669539] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.522100] kauditd_printk_skb: 69 callbacks suppressed
	[Mar18 14:27] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.818253] systemd-fstab-generator[3813]: Ignoring "noauto" option for root device
	[  +4.727516] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.584602] systemd-fstab-generator[4133]: Ignoring "noauto" option for root device
	[ +12.986399] systemd-fstab-generator[4319]: Ignoring "noauto" option for root device
	[  +0.139683] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 14:28] kauditd_printk_skb: 76 callbacks suppressed
	
	
	==> etcd [1f877d2b9b5d6969de266d253c6bb56f65a5e7309be6635d093ab1a2b18b7ae2] <==
	{"level":"info","ts":"2024-03-18T14:27:15.45248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd switched to configuration voters=(13175834472786694621)"}
	{"level":"info","ts":"2024-03-18T14:27:15.464436Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T14:27:15.46449Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.40:2380"}
	{"level":"info","ts":"2024-03-18T14:27:15.468922Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.40:2380"}
	{"level":"info","ts":"2024-03-18T14:27:15.477504Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1931abef9148948f","local-member-id":"b6d9f7a4f9cc11dd","added-peer-id":"b6d9f7a4f9cc11dd","added-peer-peer-urls":["https://192.168.61.40:2380"]}
	{"level":"info","ts":"2024-03-18T14:27:15.477789Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b6d9f7a4f9cc11dd","initial-advertise-peer-urls":["https://192.168.61.40:2380"],"listen-peer-urls":["https://192.168.61.40:2380"],"advertise-client-urls":["https://192.168.61.40:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.40:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T14:27:15.477844Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T14:27:15.966879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T14:27:15.966952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T14:27:15.966982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd received MsgPreVoteResp from b6d9f7a4f9cc11dd at term 1"}
	{"level":"info","ts":"2024-03-18T14:27:15.966997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T14:27:15.967003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd received MsgVoteResp from b6d9f7a4f9cc11dd at term 2"}
	{"level":"info","ts":"2024-03-18T14:27:15.967013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd became leader at term 2"}
	{"level":"info","ts":"2024-03-18T14:27:15.96702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b6d9f7a4f9cc11dd elected leader b6d9f7a4f9cc11dd at term 2"}
	{"level":"info","ts":"2024-03-18T14:27:15.971008Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b6d9f7a4f9cc11dd","local-member-attributes":"{Name:no-preload-188109 ClientURLs:[https://192.168.61.40:2379]}","request-path":"/0/members/b6d9f7a4f9cc11dd/attributes","cluster-id":"1931abef9148948f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T14:27:15.971228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:27:15.971778Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:27:15.971973Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:27:15.97401Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1931abef9148948f","local-member-id":"b6d9f7a4f9cc11dd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:27:15.974106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:27:15.973658Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T14:27:15.974134Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T14:27:15.975547Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T14:27:15.979346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.40:2379"}
	{"level":"info","ts":"2024-03-18T14:27:15.997399Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 14:36:39 up 14 min,  0 users,  load average: 0.06, 0.20, 0.18
	Linux no-preload-188109 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ad416fceca513ecfdac4be6680578fe9184fcaa64065021395d8fecb4878cab9] <==
	I0318 14:30:36.350142       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:32:17.644777       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:32:17.645066       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0318 14:32:18.645490       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:32:18.645572       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:32:18.645583       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:32:18.645686       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:32:18.645887       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:32:18.646823       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:33:18.646259       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:33:18.646366       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:33:18.646376       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:33:18.647571       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:33:18.647654       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:33:18.647685       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:35:18.647571       1 handler_proxy.go:93] no RequestInfo found in the context
	W0318 14:35:18.647915       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:35:18.648134       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:35:18.648163       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0318 14:35:18.648226       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:35:18.649402       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [281a4e15f233893d5c7e8f484155034e7cd3a218baf225e18b560cd195552645] <==
	I0318 14:31:03.325684       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:31:32.840511       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:31:33.336351       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:32:02.847130       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:32:03.349385       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:32:32.854011       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:32:33.358233       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:33:02.860046       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:33:03.367548       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:33:32.866630       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:33:33.377115       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:33:34.146325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="334.407µs"
	I0318 14:33:48.145217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="110.597µs"
	E0318 14:34:02.872651       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:34:03.386637       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:34:32.878908       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:34:33.395566       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:35:02.884458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:35:03.404991       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:35:32.891358       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:35:33.413681       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:36:02.896648       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:36:03.423292       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:36:32.903206       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:36:33.435742       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7cdf1dc2f84584bb32188b32b39e9b1dadc3d7c0224ddcbcbd8bb13189be0bc6] <==
	I0318 14:27:35.300112       1 server_others.go:72] "Using iptables proxy"
	I0318 14:27:35.384922       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.40"]
	I0318 14:27:35.643433       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0318 14:27:35.643520       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 14:27:35.643547       1 server_others.go:168] "Using iptables Proxier"
	I0318 14:27:35.653546       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 14:27:35.653926       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0318 14:27:35.654251       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 14:27:35.658942       1 config.go:188] "Starting service config controller"
	I0318 14:27:35.658994       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 14:27:35.659020       1 config.go:97] "Starting endpoint slice config controller"
	I0318 14:27:35.659024       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 14:27:35.659489       1 config.go:315] "Starting node config controller"
	I0318 14:27:35.659531       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 14:27:35.760252       1 shared_informer.go:318] Caches are synced for node config
	I0318 14:27:35.760330       1 shared_informer.go:318] Caches are synced for service config
	I0318 14:27:35.760365       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cc83d2a0284ea679e26d984aaf8bef522a10e4d8274c4246f0326bcc74476625] <==
	W0318 14:27:18.506124       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 14:27:18.506226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 14:27:18.640555       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 14:27:18.640667       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 14:27:18.648601       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 14:27:18.648682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 14:27:18.658901       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 14:27:18.659564       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 14:27:18.708662       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 14:27:18.709310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 14:27:18.745425       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 14:27:18.745575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 14:27:18.788058       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 14:27:18.788172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 14:27:18.819684       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 14:27:18.819873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 14:27:18.846402       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 14:27:18.846501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 14:27:18.880531       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 14:27:18.880768       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 14:27:18.939147       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 14:27:18.939431       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 14:27:19.139145       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 14:27:19.139354       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 14:27:20.840228       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:34:21 no-preload-188109 kubelet[4140]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:34:21 no-preload-188109 kubelet[4140]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:34:21 no-preload-188109 kubelet[4140]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:34:21 no-preload-188109 kubelet[4140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:34:29 no-preload-188109 kubelet[4140]: E0318 14:34:29.126756    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:34:44 no-preload-188109 kubelet[4140]: E0318 14:34:44.127186    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:34:57 no-preload-188109 kubelet[4140]: E0318 14:34:57.126813    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:35:08 no-preload-188109 kubelet[4140]: E0318 14:35:08.126667    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:35:20 no-preload-188109 kubelet[4140]: E0318 14:35:20.126550    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:35:21 no-preload-188109 kubelet[4140]: E0318 14:35:21.173981    4140 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:35:21 no-preload-188109 kubelet[4140]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:35:21 no-preload-188109 kubelet[4140]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:35:21 no-preload-188109 kubelet[4140]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:35:21 no-preload-188109 kubelet[4140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:35:32 no-preload-188109 kubelet[4140]: E0318 14:35:32.126574    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:35:45 no-preload-188109 kubelet[4140]: E0318 14:35:45.131091    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:35:57 no-preload-188109 kubelet[4140]: E0318 14:35:57.126359    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:36:11 no-preload-188109 kubelet[4140]: E0318 14:36:11.127806    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:36:21 no-preload-188109 kubelet[4140]: E0318 14:36:21.172950    4140 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:36:21 no-preload-188109 kubelet[4140]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:36:21 no-preload-188109 kubelet[4140]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:36:21 no-preload-188109 kubelet[4140]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:36:21 no-preload-188109 kubelet[4140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:36:22 no-preload-188109 kubelet[4140]: E0318 14:36:22.126387    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:36:33 no-preload-188109 kubelet[4140]: E0318 14:36:33.125814    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	
	
	==> storage-provisioner [ed6e4fe42941b75e00ee31ae83067d7cce9e35e61832045df43b19ce9e57215b] <==
	I0318 14:27:36.170754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 14:27:36.182196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 14:27:36.182258       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 14:27:36.195214       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 14:27:36.195308       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe2db88c-b0a7-4f9b-a9db-6073f267d102", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-188109_9ff19ba4-0a18-4f37-a93c-ad8138b634cb became leader
	I0318 14:27:36.195667       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-188109_9ff19ba4-0a18-4f37-a93c-ad8138b634cb!
	I0318 14:27:36.297047       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-188109_9ff19ba4-0a18-4f37-a93c-ad8138b634cb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-188109 -n no-preload-188109
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-188109 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9hjss
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-188109 describe pod metrics-server-57f55c9bc5-9hjss
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-188109 describe pod metrics-server-57f55c9bc5-9hjss: exit status 1 (69.261228ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9hjss" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-188109 describe pod metrics-server-57f55c9bc5-9hjss: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.26s)
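The UserAppExistsAfterStop subtests in this report (the no-preload failure above and the old-k8s-version run below) poll the cluster for pods matching a label selector until a long timeout expires; each repeated "connection refused" WARNING is one failed poll attempt made while the apiserver on the restarted node was still unreachable. The following is a minimal, illustrative sketch of such a poll loop using client-go with an assumed 3-second interval; it is not the actual helpers_test.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls the namespace for pods matching selector and returns
// once at least one pod is Running, or errors out at the timeout. Transient
// list errors (e.g. "connection refused" while the apiserver restarts) are
// logged and retried rather than treated as fatal, which is what produces
// long runs of identical WARNING lines in a test log.
func waitForRunningPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
			return false, nil // keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForRunningPod(context.Background(), cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	if err != nil {
		fmt.Println("pods never became Running:", err)
	}
}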

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:30:12.747607 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:30:23.563475 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:30:32.237613 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:31:15.725469 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:31:25.565230 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:31:35.794114 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:31:55.282779 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
(previous warning repeated 41 more times)
E0318 14:32:37.319546 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:32:38.769511 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
(previous warning repeated 8 more times)
E0318 14:32:48.609795 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
(previous warning repeated 21 more times)
E0318 14:33:10.147548 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
(previous warning repeated 7 more times)
E0318 14:33:18.301873 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
(previous warning repeated 41 more times)
E0318 14:34:00.517229 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
(previous warning repeated 17 more times)
E0318 14:34:17.918841 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:35:12.747681 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:35:32.237824 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:36:15.725084 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:36:25.565202 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:37:20.372332 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:37:20.966562 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:37:37.319374 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:38:10.147055 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:38:18.301712 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-782728 -n old-k8s-version-782728
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 2 (263.319865ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-782728" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
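The warnings above come from the test's poll of the dashboard pods through the apiserver at 192.168.50.229:8443, which keeps refusing connections after the stop/start cycle. A minimal manual sketch of the same two checks, assuming the minikube binary from this run and a kubeconfig context named after the profile (old-k8s-version-782728), would be:

	# host/apiserver state for the profile, using the same status template fields the test queries
	out/minikube-linux-amd64 status -p old-k8s-version-782728 --format='{{.Host}} {{.APIServer}}'
	# the pod list the warning corresponds to; this fails with "connection refused" while the apiserver is down
	kubectl --context old-k8s-version-782728 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard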
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 2 (259.373216ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-782728 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-782728 logs -n 25: (1.594731891s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-059272 sudo find                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo find                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-059272 sudo crio                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo crio                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-059272                                       | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| delete  | -p flannel-059272                                      | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-784874 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | disable-driver-mounts-784874                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:14 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-188109             | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767719            | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-075922  | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC | 18 Mar 24 14:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC |                     |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-782728        | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-188109                  | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC | 18 Mar 24 14:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767719                 | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-075922       | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-782728             | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:17:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:17:21.149860 1129259 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:17:21.150009 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150020 1129259 out.go:304] Setting ErrFile to fd 2...
	I0318 14:17:21.150027 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150261 1129259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:17:21.150831 1129259 out.go:298] Setting JSON to false
	I0318 14:17:21.151818 1129259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21588,"bootTime":1710749853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:17:21.151904 1129259 start.go:139] virtualization: kvm guest
	I0318 14:17:21.154086 1129259 out.go:177] * [old-k8s-version-782728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:17:21.155595 1129259 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:17:21.157136 1129259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:17:21.155603 1129259 notify.go:220] Checking for updates...
	I0318 14:17:21.160112 1129259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:17:21.161672 1129259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:17:21.163212 1129259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:17:21.164653 1129259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:17:21.166692 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:17:21.167108 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.167176 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.182529 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0318 14:17:21.183003 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.183578 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.183602 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.183959 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.184192 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.186217 1129259 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 14:17:21.187902 1129259 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:17:21.188243 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.188288 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.204193 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0318 14:17:21.204646 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.205226 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.205262 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.205658 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.205879 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.243555 1129259 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 14:17:21.244857 1129259 start.go:297] selected driver: kvm2
	I0318 14:17:21.244882 1129259 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.245008 1129259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:17:21.245726 1129259 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.245812 1129259 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:17:21.261810 1129259 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:17:21.262852 1129259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:17:21.262962 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:17:21.262975 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:17:21.263064 1129259 start.go:340] cluster config:
	{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.263366 1129259 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.265819 1129259 out.go:177] * Starting "old-k8s-version-782728" primary control-plane node in "old-k8s-version-782728" cluster
	I0318 14:17:24.228169 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:21.267156 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:17:21.267198 1129259 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 14:17:21.267214 1129259 cache.go:56] Caching tarball of preloaded images
	I0318 14:17:21.267311 1129259 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:17:21.267327 1129259 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 14:17:21.267448 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:17:21.267695 1129259 start.go:360] acquireMachinesLock for old-k8s-version-782728: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:17:27.300185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:33.380164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:36.452102 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:42.536087 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:45.604211 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:51.684168 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:54.756227 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:00.836108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:03.908246 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:09.988223 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:13.060123 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:19.140179 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:22.212209 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:28.292206 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:31.364121 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:37.444195 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:40.516108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:46.596160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:49.668120 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:55.748134 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:58.820202 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:04.900183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:07.972128 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:14.052140 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:17.124242 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:23.204175 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:26.276172 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:32.356183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:35.428256 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:41.508181 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:44.580142 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:50.660193 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:53.732160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:59.812151 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:02.884164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:08.964174 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:12.036185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:18.116178 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:21.188147 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:27.268137 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:30.340177 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:33.345074 1128788 start.go:364] duration metric: took 4m12.599457373s to acquireMachinesLock for "embed-certs-767719"
	I0318 14:20:33.345136 1128788 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:33.345145 1128788 fix.go:54] fixHost starting: 
	I0318 14:20:33.345584 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:33.345638 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:33.362007 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0318 14:20:33.362504 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:33.363014 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:20:33.363037 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:33.363432 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:33.363634 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:33.363787 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:20:33.365593 1128788 fix.go:112] recreateIfNeeded on embed-certs-767719: state=Stopped err=<nil>
	I0318 14:20:33.365619 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	W0318 14:20:33.365792 1128788 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:33.367525 1128788 out.go:177] * Restarting existing kvm2 VM for "embed-certs-767719" ...
	I0318 14:20:33.368930 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Start
	I0318 14:20:33.369145 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring networks are active...
	I0318 14:20:33.370041 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network default is active
	I0318 14:20:33.370474 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network mk-embed-certs-767719 is active
	I0318 14:20:33.370832 1128788 main.go:141] libmachine: (embed-certs-767719) Getting domain xml...
	I0318 14:20:33.371609 1128788 main.go:141] libmachine: (embed-certs-767719) Creating domain...
	I0318 14:20:34.596425 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting to get IP...
	I0318 14:20:34.597292 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.597677 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.597753 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.597666 1130210 retry.go:31] will retry after 244.312377ms: waiting for machine to come up
	I0318 14:20:34.843360 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.844039 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.844082 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.843988 1130210 retry.go:31] will retry after 388.782007ms: waiting for machine to come up
	I0318 14:20:35.234931 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.235304 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.235334 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.235252 1130210 retry.go:31] will retry after 449.871291ms: waiting for machine to come up
	I0318 14:20:33.342334 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:33.342408 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.342790 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:20:33.342823 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.343061 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:20:33.344920 1128583 machine.go:97] duration metric: took 4m37.408911801s to provisionDockerMachine
	I0318 14:20:33.344982 1128583 fix.go:56] duration metric: took 4m37.431584024s for fixHost
	I0318 14:20:33.344992 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 4m37.431613044s
	W0318 14:20:33.345017 1128583 start.go:713] error starting host: provision: host is not running
	W0318 14:20:33.345209 1128583 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 14:20:33.345223 1128583 start.go:728] Will try again in 5 seconds ...
	I0318 14:20:35.687048 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.687565 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.687604 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.687508 1130210 retry.go:31] will retry after 470.225551ms: waiting for machine to come up
	I0318 14:20:36.159138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.159642 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.159668 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.159590 1130210 retry.go:31] will retry after 638.634635ms: waiting for machine to come up
	I0318 14:20:36.799431 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.799820 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.799857 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.799764 1130210 retry.go:31] will retry after 758.659569ms: waiting for machine to come up
	I0318 14:20:37.559752 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:37.560189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:37.560224 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:37.560116 1130210 retry.go:31] will retry after 1.163344023s: waiting for machine to come up
	I0318 14:20:38.724981 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:38.725498 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:38.725561 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:38.725341 1130210 retry.go:31] will retry after 1.155934539s: waiting for machine to come up
	I0318 14:20:39.882622 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:39.883025 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:39.883074 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:39.882966 1130210 retry.go:31] will retry after 1.832023161s: waiting for machine to come up
	I0318 14:20:38.347296 1128583 start.go:360] acquireMachinesLock for no-preload-188109: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:20:41.717138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:41.717723 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:41.717757 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:41.717642 1130210 retry.go:31] will retry after 1.526824443s: waiting for machine to come up
	I0318 14:20:43.246389 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:43.246960 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:43.246997 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:43.246901 1130210 retry.go:31] will retry after 2.608273558s: waiting for machine to come up
	I0318 14:20:45.858375 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:45.858919 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:45.858943 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:45.858871 1130210 retry.go:31] will retry after 2.272908905s: waiting for machine to come up
	I0318 14:20:48.134345 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:48.134774 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:48.134826 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:48.134739 1130210 retry.go:31] will retry after 3.671073699s: waiting for machine to come up
	I0318 14:20:53.273198 1128964 start.go:364] duration metric: took 4m11.791347901s to acquireMachinesLock for "default-k8s-diff-port-075922"
	I0318 14:20:53.273284 1128964 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:53.273295 1128964 fix.go:54] fixHost starting: 
	I0318 14:20:53.273834 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:53.273879 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:53.291440 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0318 14:20:53.291988 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:53.292571 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:20:53.292605 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:53.292931 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:53.293125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:20:53.293278 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:20:53.294856 1128964 fix.go:112] recreateIfNeeded on default-k8s-diff-port-075922: state=Stopped err=<nil>
	I0318 14:20:53.294889 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	W0318 14:20:53.295063 1128964 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:53.297784 1128964 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-075922" ...
	I0318 14:20:51.809859 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.810477 1128788 main.go:141] libmachine: (embed-certs-767719) Found IP for machine: 192.168.72.45
	I0318 14:20:51.810503 1128788 main.go:141] libmachine: (embed-certs-767719) Reserving static IP address...
	I0318 14:20:51.810518 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has current primary IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.811061 1128788 main.go:141] libmachine: (embed-certs-767719) Reserved static IP address: 192.168.72.45
	I0318 14:20:51.811104 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.811112 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting for SSH to be available...
	I0318 14:20:51.811137 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | skip adding static IP to network mk-embed-certs-767719 - found existing host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"}
	I0318 14:20:51.811163 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Getting to WaitForSSH function...
	I0318 14:20:51.813739 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814076 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.814121 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH client type: external
	I0318 14:20:51.814225 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa (-rw-------)
	I0318 14:20:51.814282 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:20:51.814327 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | About to run SSH command:
	I0318 14:20:51.814346 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | exit 0
	I0318 14:20:51.944192 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | SSH cmd err, output: <nil>: 
	I0318 14:20:51.944624 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetConfigRaw
	I0318 14:20:51.945477 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:51.948244 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.948667 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.948711 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.949069 1128788 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/config.json ...
	I0318 14:20:51.949305 1128788 machine.go:94] provisionDockerMachine start ...
	I0318 14:20:51.949327 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:51.949596 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:51.952267 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952653 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.952703 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952836 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:51.953047 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953200 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953376 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:51.953525 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:51.953772 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:51.953785 1128788 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:20:52.068806 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:20:52.068847 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069162 1128788 buildroot.go:166] provisioning hostname "embed-certs-767719"
	I0318 14:20:52.069198 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069500 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.072258 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072750 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.072785 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072939 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.073146 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073312 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073492 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.073730 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.073916 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.073934 1128788 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-767719 && echo "embed-certs-767719" | sudo tee /etc/hostname
	I0318 14:20:52.204197 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-767719
	
	I0318 14:20:52.204258 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.207520 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.207927 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.207959 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.208178 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.208478 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208740 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208961 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.209164 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.209352 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.209370 1128788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-767719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-767719/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-767719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:20:52.337185 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:52.337220 1128788 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:20:52.337243 1128788 buildroot.go:174] setting up certificates
	I0318 14:20:52.337253 1128788 provision.go:84] configureAuth start
	I0318 14:20:52.337264 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.337561 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:52.340693 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341061 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.341098 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341280 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.343239 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343570 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.343595 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343709 1128788 provision.go:143] copyHostCerts
	I0318 14:20:52.343782 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:20:52.343794 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:20:52.343888 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:20:52.344001 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:20:52.344010 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:20:52.344038 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:20:52.344095 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:20:52.344103 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:20:52.344126 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:20:52.344220 1128788 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.embed-certs-767719 san=[127.0.0.1 192.168.72.45 embed-certs-767719 localhost minikube]
	I0318 14:20:52.550241 1128788 provision.go:177] copyRemoteCerts
	I0318 14:20:52.550380 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:20:52.550433 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.553182 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553591 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.553626 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553824 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.554056 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.554241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.554392 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:52.645341 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:20:52.672476 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:20:52.698609 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:20:52.724434 1128788 provision.go:87] duration metric: took 387.165868ms to configureAuth
	I0318 14:20:52.724471 1128788 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:20:52.724727 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:20:52.724827 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.727323 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727700 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.727764 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727882 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.728098 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728443 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.728626 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.728859 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.728878 1128788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:20:53.012918 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:20:53.012959 1128788 machine.go:97] duration metric: took 1.063639009s to provisionDockerMachine
	I0318 14:20:53.012976 1128788 start.go:293] postStartSetup for "embed-certs-767719" (driver="kvm2")
	I0318 14:20:53.012990 1128788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:20:53.013039 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.013471 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:20:53.013505 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.016524 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.016929 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.016961 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.017153 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.017372 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.017582 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.017846 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.107977 1128788 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:20:53.113146 1128788 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:20:53.113184 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:20:53.113302 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:20:53.113423 1128788 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:20:53.113558 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:20:53.125166 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:53.152094 1128788 start.go:296] duration metric: took 139.099686ms for postStartSetup
	I0318 14:20:53.152147 1128788 fix.go:56] duration metric: took 19.807001958s for fixHost
	I0318 14:20:53.152194 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.155058 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155371 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.155401 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155643 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.155908 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156138 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156307 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.156536 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:53.156770 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:53.156786 1128788 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:20:53.272998 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771653.240528844
	
	I0318 14:20:53.273029 1128788 fix.go:216] guest clock: 1710771653.240528844
	I0318 14:20:53.273046 1128788 fix.go:229] Guest: 2024-03-18 14:20:53.240528844 +0000 UTC Remote: 2024-03-18 14:20:53.15215228 +0000 UTC m=+272.563569050 (delta=88.376564ms)
	I0318 14:20:53.273075 1128788 fix.go:200] guest clock delta is within tolerance: 88.376564ms
	I0318 14:20:53.273083 1128788 start.go:83] releasing machines lock for "embed-certs-767719", held for 19.927965733s
	I0318 14:20:53.273118 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.273431 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:53.276309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276740 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.276768 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276958 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277493 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277716 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277806 1128788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:20:53.277851 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.277976 1128788 ssh_runner.go:195] Run: cat /version.json
	I0318 14:20:53.278002 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.280799 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.280853 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281234 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281263 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281289 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281518 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281616 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281767 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281850 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281945 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282028 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282090 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.282179 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.386584 1128788 ssh_runner.go:195] Run: systemctl --version
	I0318 14:20:53.393371 1128788 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:20:53.547565 1128788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:20:53.554182 1128788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:20:53.554266 1128788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:20:53.573031 1128788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:20:53.573071 1128788 start.go:494] detecting cgroup driver to use...
	I0318 14:20:53.573197 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:20:53.591649 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:20:53.607279 1128788 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:20:53.607359 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:20:53.624327 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:20:53.640398 1128788 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:20:53.759979 1128788 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:20:53.931294 1128788 docker.go:233] disabling docker service ...
	I0318 14:20:53.931381 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:20:53.954433 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:20:53.969396 1128788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:20:54.107898 1128788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:20:54.241874 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:20:54.257748 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:20:54.278981 1128788 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:20:54.279057 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.293329 1128788 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:20:54.293390 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.304838 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.316646 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.328623 1128788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
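The sed invocations above point CRI-O at the registry.k8s.io/pause:3.9 pause image and switch its cgroup manager to cgroupfs by rewriting /etc/crio/crio.conf.d/02-crio.conf in place. A minimal, self-contained Go sketch of the same kind of line rewrite (illustrative only, not minikube's implementation):

    // Sketch: rewrite pause_image and cgroup_manager in a CRI-O drop-in config,
    // mirroring the sed commands in the log above. Requires root to write the file.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path seen in the log

    	data, err := os.ReadFile(conf)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Replace any existing pause_image / cgroup_manager lines wholesale.
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
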
	I0318 14:20:54.340540 1128788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:20:54.352368 1128788 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:20:54.352433 1128788 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:20:54.368965 1128788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
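The failed sysctl probe above is expected while the br_netfilter module is not yet loaded; the runner falls back to modprobe and then enables IPv4 forwarding. A rough Go sketch of those two steps (assumes root and the standard /proc paths; not minikube's code):

    // Sketch: ensure bridge netfilter is available and IPv4 forwarding is on.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The sysctl key only exists once br_netfilter is loaded.
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		fmt.Println("bridge netfilter not available, loading br_netfilter:", err)
    		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			log.Fatalf("modprobe failed: %v\n%s", err, out)
    		}
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("netfilter and IP forwarding configured")
    }
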
	I0318 14:20:54.389268 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:54.511182 1128788 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:20:54.657685 1128788 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:20:54.657798 1128788 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:20:54.663591 1128788 start.go:562] Will wait 60s for crictl version
	I0318 14:20:54.663670 1128788 ssh_runner.go:195] Run: which crictl
	I0318 14:20:54.667903 1128788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:20:54.707961 1128788 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:20:54.708065 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.738240 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.773562 1128788 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:20:54.775286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:54.778784 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779228 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:54.779265 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779498 1128788 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 14:20:54.784575 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:54.799207 1128788 kubeadm.go:877] updating cluster {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:20:54.799380 1128788 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:20:54.799440 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:54.839309 1128788 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:20:54.839387 1128788 ssh_runner.go:195] Run: which lz4
	I0318 14:20:54.844323 1128788 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:20:54.850487 1128788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:20:54.850524 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
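Here the runner decides the preload path: crictl images --output json shows no kube-apiserver image, so the ~458 MB preload tarball is copied over and extracted instead of pulling images individually. A hedged Go sketch of that decision (the JSON field names are assumed from crictl's output format, not taken from minikube):

    // Sketch: check whether the expected control-plane image is already present
    // before transferring the preload tarball.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		log.Fatal(err)
    	}
    	for _, img := range imgs.Images {
    		for _, tag := range img.RepoTags {
    			if strings.HasPrefix(tag, "registry.k8s.io/kube-apiserver:") {
    				fmt.Println("preloaded images present, skipping tarball copy")
    				return
    			}
    		}
    	}
    	fmt.Println("images not preloaded, need the preload tarball")
    }
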
	I0318 14:20:53.299380 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Start
	I0318 14:20:53.299595 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring networks are active...
	I0318 14:20:53.300497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network default is active
	I0318 14:20:53.300887 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network mk-default-k8s-diff-port-075922 is active
	I0318 14:20:53.301316 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Getting domain xml...
	I0318 14:20:53.302079 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Creating domain...
	I0318 14:20:54.607619 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting to get IP...
	I0318 14:20:54.608510 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609075 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.609050 1130331 retry.go:31] will retry after 282.377323ms: waiting for machine to come up
	I0318 14:20:54.892766 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893323 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.893259 1130331 retry.go:31] will retry after 264.840581ms: waiting for machine to come up
	I0318 14:20:55.160018 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160536 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160578 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.160460 1130331 retry.go:31] will retry after 402.458985ms: waiting for machine to come up
	I0318 14:20:55.564282 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564773 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.564727 1130331 retry.go:31] will retry after 382.70672ms: waiting for machine to come up
	I0318 14:20:55.949676 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950183 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950218 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.950122 1130331 retry.go:31] will retry after 676.466466ms: waiting for machine to come up
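The "will retry after ..." lines above are the usual wait-for-DHCP-lease loop: each probe of the libvirt network fails until the VM obtains an address, and the delay grows between attempts. A generic Go sketch of that retry pattern (the delays and helper name here are illustrative, not minikube's retry.go):

    // Sketch: retry a probe with a randomised, growing delay between attempts.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn until it succeeds or attempts run out.
    func retry(attempts int, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := time.Duration(i+1) * time.Duration(200+rand.Intn(300)) * time.Millisecond
    		fmt.Printf("will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	_ = retry(5, func() error {
    		// Placeholder for "look up the domain's DHCP lease"; always fails here.
    		return errors.New("unable to find current IP address of domain")
    	})
    }
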
	I0318 14:20:56.798325 1128788 crio.go:444] duration metric: took 1.954051074s to copy over tarball
	I0318 14:20:56.798418 1128788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:20:59.431722 1128788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.633260911s)
	I0318 14:20:59.431777 1128788 crio.go:451] duration metric: took 2.633417573s to extract the tarball
	I0318 14:20:59.431788 1128788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:20:59.476265 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:59.534130 1128788 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:20:59.534161 1128788 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:20:59.534173 1128788 kubeadm.go:928] updating node { 192.168.72.45 8443 v1.28.4 crio true true} ...
	I0318 14:20:59.534357 1128788 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-767719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:20:59.534499 1128788 ssh_runner.go:195] Run: crio config
	I0318 14:20:59.594778 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:20:59.594814 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:20:59.594831 1128788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:20:59.594894 1128788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.45 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-767719 NodeName:embed-certs-767719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:20:59.595092 1128788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-767719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:20:59.595203 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:20:59.610298 1128788 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:20:59.610388 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:20:59.624050 1128788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 14:20:59.644283 1128788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:20:59.663987 1128788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0318 14:20:59.685379 1128788 ssh_runner.go:195] Run: grep 192.168.72.45	control-plane.minikube.internal$ /etc/hosts
	I0318 14:20:59.690360 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:59.705657 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:59.839158 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:20:59.857617 1128788 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719 for IP: 192.168.72.45
	I0318 14:20:59.857642 1128788 certs.go:194] generating shared ca certs ...
	I0318 14:20:59.857674 1128788 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:20:59.857839 1128788 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:20:59.857882 1128788 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:20:59.857893 1128788 certs.go:256] generating profile certs ...
	I0318 14:20:59.858006 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/client.key
	I0318 14:20:59.858061 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key.f59f641c
	I0318 14:20:59.858098 1128788 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key
	I0318 14:20:59.858268 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:20:59.858301 1128788 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:20:59.858308 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:20:59.858331 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:20:59.858360 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:20:59.858382 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:20:59.858424 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:59.859110 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:20:59.901101 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:20:59.947010 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:20:59.990882 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:00.032358 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 14:21:00.070194 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:00.108670 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:00.137760 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:00.168481 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:00.199292 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:00.228315 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:00.257409 1128788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:00.277720 1128788 ssh_runner.go:195] Run: openssl version
	I0318 14:21:00.284138 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:00.296443 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302083 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302160 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.308748 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:00.322025 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:00.334654 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340319 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340404 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.347454 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:00.359627 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:00.371865 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377236 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377335 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.387041 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:00.404525 1128788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:00.412919 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:00.422577 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:00.434217 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:00.444535 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:00.452863 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:00.459979 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
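The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours; a failing check would force the cert to be regenerated. The same check in plain Go (sketch only; the path shown is one of the certs from the log):

    // Sketch: report whether a PEM certificate expires within a given duration,
    // equivalent to "openssl x509 -checkend 86400".
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
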
	I0318 14:21:00.467503 1128788 kubeadm.go:391] StartCluster: {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:00.467680 1128788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:00.467780 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.507833 1128788 cri.go:89] found id: ""
	I0318 14:21:00.507926 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:00.519958 1128788 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:00.519982 1128788 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:00.520011 1128788 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:00.520066 1128788 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:00.532229 1128788 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:00.533479 1128788 kubeconfig.go:125] found "embed-certs-767719" server: "https://192.168.72.45:8443"
	I0318 14:21:00.536185 1128788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:00.548434 1128788 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.45
	I0318 14:21:00.548484 1128788 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:00.548498 1128788 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:00.548551 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.592096 1128788 cri.go:89] found id: ""
	I0318 14:21:00.592168 1128788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:00.610826 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:00.622294 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:00.622330 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:00.622386 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:00.633009 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:00.633089 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:20:56.628134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628708 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628747 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:56.628643 1130331 retry.go:31] will retry after 703.45784ms: waiting for machine to come up
	I0318 14:20:57.334203 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334666 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334702 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:57.334600 1130331 retry.go:31] will retry after 1.177266521s: waiting for machine to come up
	I0318 14:20:58.513803 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514452 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514485 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:58.514389 1130331 retry.go:31] will retry after 1.389627955s: waiting for machine to come up
	I0318 14:20:59.906109 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906663 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906750 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:59.906632 1130331 retry.go:31] will retry after 1.239662517s: waiting for machine to come up
	I0318 14:21:01.147929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148325 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:01.148248 1130331 retry.go:31] will retry after 2.183067358s: waiting for machine to come up
	I0318 14:21:00.644684 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:00.921213 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:00.921307 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:00.932412 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.943408 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:00.943481 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.955574 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:00.966416 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:00.966483 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:00.978014 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:00.993622 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:01.128726 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.331974 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.203164646s)
	I0318 14:21:02.332035 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.574592 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.686011 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.821189 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:02.821373 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.322200 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.822207 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.838586 1128788 api_server.go:72] duration metric: took 1.017395673s to wait for apiserver process to appear ...
	I0318 14:21:03.838622 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:03.838660 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:03.839282 1128788 api_server.go:269] stopped: https://192.168.72.45:8443/healthz: Get "https://192.168.72.45:8443/healthz": dial tcp 192.168.72.45:8443: connect: connection refused
	I0318 14:21:04.339675 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:03.333080 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333620 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333648 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:03.333583 1130331 retry.go:31] will retry after 2.259124316s: waiting for machine to come up
	I0318 14:21:05.594356 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594823 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:05.594754 1130331 retry.go:31] will retry after 2.492274875s: waiting for machine to come up
	I0318 14:21:07.054330 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:07.054373 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:07.054392 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.073841 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.073894 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.339285 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.345307 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.345340 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.838915 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.846722 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.846759 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:08.339409 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:08.344790 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:21:08.358050 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:08.358097 1128788 api_server.go:131] duration metric: took 4.519466088s to wait for apiserver health ...
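The loop above polls https://192.168.72.45:8443/healthz until the apiserver answers 200 "ok"; the earlier 403 and 500 responses are normal while post-start hooks finish. A small self-contained Go sketch of such a poll (TLS verification is skipped here purely to keep the example short; a real client would trust the cluster CA instead):

    // Sketch: poll the apiserver healthz endpoint until it reports healthy.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.72.45:8443/healthz" // endpoint taken from the log
    	for i := 0; i < 60; i++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("apiserver not reachable yet:", err)
    		} else {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				fmt.Println("healthz ok")
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", code)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for a healthy apiserver")
    }
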
	I0318 14:21:08.358121 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:21:08.358130 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:08.359982 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:21:08.361428 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:08.378195 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:21:08.409269 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:08.421874 1128788 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:08.421960 1128788 system_pods.go:61] "coredns-5dd5756b68-4dmw2" [324897fc-dd26-47f1-b8bc-4d2ed721a576] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:08.421971 1128788 system_pods.go:61] "etcd-embed-certs-767719" [df147cb8-989c-408d-ade8-547858a8c2bb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:08.421982 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [82f7d170-3b3c-448c-b824-6d263c5c1128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:08.421989 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [cd4dd4f3-a727-4864-b0e9-a89758537de9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:08.422002 1128788 system_pods.go:61] "kube-proxy-mtx9w" [b46b48ff-e4c0-4595-82c4-7c0c86103262] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:08.422010 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [63774f42-c85e-467f-9bd3-0c78d44b2681] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:08.422022 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-jr9wp" [e40748e2-ebc3-4c4f-a9cc-01bbc7416f35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:08.422030 1128788 system_pods.go:61] "storage-provisioner" [1b51e6a7-2693-4d0b-b47e-ccbcb1e46424] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:08.422047 1128788 system_pods.go:74] duration metric: took 12.746875ms to wait for pod list to return data ...
	I0318 14:21:08.422058 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:08.432361 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:08.432461 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:08.432483 1128788 node_conditions.go:105] duration metric: took 10.415127ms to run NodePressure ...
	I0318 14:21:08.432524 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:08.730544 1128788 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:08.735970 1128788 kubeadm.go:733] kubelet initialised
	I0318 14:21:08.736001 1128788 kubeadm.go:734] duration metric: took 5.422027ms waiting for restarted kubelet to initialise ...
	I0318 14:21:08.736042 1128788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:08.745586 1128788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:08.090446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090834 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:08.090779 1130331 retry.go:31] will retry after 3.31085892s: waiting for machine to come up
	I0318 14:21:12.749494 1129259 start.go:364] duration metric: took 3m51.481737314s to acquireMachinesLock for "old-k8s-version-782728"
	I0318 14:21:12.749582 1129259 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:12.749596 1129259 fix.go:54] fixHost starting: 
	I0318 14:21:12.750059 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:12.750110 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:12.772262 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0318 14:21:12.772787 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:12.773383 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:21:12.773408 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:12.773864 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:12.774101 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:12.774261 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetState
	I0318 14:21:12.776193 1129259 fix.go:112] recreateIfNeeded on old-k8s-version-782728: state=Stopped err=<nil>
	I0318 14:21:12.776227 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	W0318 14:21:12.776377 1129259 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:12.778538 1129259 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-782728" ...
	I0318 14:21:11.405935 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has current primary IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406539 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Found IP for machine: 192.168.83.39
	I0318 14:21:11.406553 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserving static IP address...
	I0318 14:21:11.407015 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.407048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | skip adding static IP to network mk-default-k8s-diff-port-075922 - found existing host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"}
	I0318 14:21:11.407066 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserved static IP address: 192.168.83.39
	I0318 14:21:11.407081 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for SSH to be available...
	I0318 14:21:11.407093 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Getting to WaitForSSH function...
	I0318 14:21:11.409327 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409674 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.409706 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409895 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH client type: external
	I0318 14:21:11.409919 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa (-rw-------)
	I0318 14:21:11.410034 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:11.410065 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | About to run SSH command:
	I0318 14:21:11.410089 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | exit 0
	I0318 14:21:11.544258 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:11.544698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetConfigRaw
	I0318 14:21:11.545370 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.548333 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.548729 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.548764 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.549053 1128964 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/config.json ...
	I0318 14:21:11.549275 1128964 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:11.549295 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:11.549533 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.551799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552156 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.552186 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552280 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.552482 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552657 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552797 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.552994 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.553261 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.553278 1128964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:11.665093 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:11.665132 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665456 1128964 buildroot.go:166] provisioning hostname "default-k8s-diff-port-075922"
	I0318 14:21:11.665493 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665730 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.668911 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.669413 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669679 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.669923 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670319 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.670530 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.670718 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.670734 1128964 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-075922 && echo "default-k8s-diff-port-075922" | sudo tee /etc/hostname
	I0318 14:21:11.807520 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-075922
	
	I0318 14:21:11.807552 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.810614 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811011 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.811047 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811257 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.811480 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811941 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.812155 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.812361 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.812387 1128964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-075922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-075922/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-075922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:11.942984 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:11.943022 1128964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:11.943078 1128964 buildroot.go:174] setting up certificates
	I0318 14:21:11.943094 1128964 provision.go:84] configureAuth start
	I0318 14:21:11.943108 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.943441 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.946659 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947091 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.947125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947328 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.949852 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950275 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.950310 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950496 1128964 provision.go:143] copyHostCerts
	I0318 14:21:11.950579 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:11.950596 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:11.950679 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:11.950859 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:11.950873 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:11.950898 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:11.950964 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:11.950971 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:11.950988 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:11.951041 1128964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-075922 san=[127.0.0.1 192.168.83.39 default-k8s-diff-port-075922 localhost minikube]
	I0318 14:21:12.019678 1128964 provision.go:177] copyRemoteCerts
	I0318 14:21:12.019756 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:12.019788 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.023122 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023603 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.023639 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023862 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.024077 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.024294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.024445 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.112914 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:12.142575 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 14:21:12.171747 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:12.200144 1128964 provision.go:87] duration metric: took 257.034667ms to configureAuth
	I0318 14:21:12.200177 1128964 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:12.200401 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:21:12.200515 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.203573 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.203978 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.204019 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.204160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.204379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204658 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.205131 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.205335 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.205367 1128964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:12.494965 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:12.494997 1128964 machine.go:97] duration metric: took 945.707691ms to provisionDockerMachine
	I0318 14:21:12.495012 1128964 start.go:293] postStartSetup for "default-k8s-diff-port-075922" (driver="kvm2")
	I0318 14:21:12.495026 1128964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:12.495048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.495450 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:12.495486 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.498444 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.498821 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498928 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.499166 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.499363 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.499560 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.588350 1128964 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:12.593611 1128964 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:12.593638 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:12.593714 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:12.593788 1128964 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:12.593875 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:12.605751 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:12.633577 1128964 start.go:296] duration metric: took 138.54984ms for postStartSetup
	I0318 14:21:12.633621 1128964 fix.go:56] duration metric: took 19.360327899s for fixHost
	I0318 14:21:12.633645 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.636446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636822 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.636850 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636989 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.637237 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637428 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637596 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.637786 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.637988 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.638002 1128964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:12.749326 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771672.727120819
	
	I0318 14:21:12.749355 1128964 fix.go:216] guest clock: 1710771672.727120819
	I0318 14:21:12.749364 1128964 fix.go:229] Guest: 2024-03-18 14:21:12.727120819 +0000 UTC Remote: 2024-03-18 14:21:12.633625447 +0000 UTC m=+271.308784721 (delta=93.495372ms)
	I0318 14:21:12.749386 1128964 fix.go:200] guest clock delta is within tolerance: 93.495372ms
	I0318 14:21:12.749392 1128964 start.go:83] releasing machines lock for "default-k8s-diff-port-075922", held for 19.476136638s
	I0318 14:21:12.749418 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.749732 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:12.752996 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753471 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.753506 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753815 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754448 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754651 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754744 1128964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:12.754791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.754943 1128964 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:12.754970 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.758153 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758303 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758628 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758660 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758694 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758758 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758927 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758988 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759057 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759157 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759251 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759292 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.759371 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.841423 1128964 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:12.868154 1128964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:13.020652 1128964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:13.028168 1128964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:13.028267 1128964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:13.047225 1128964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:13.047264 1128964 start.go:494] detecting cgroup driver to use...
	I0318 14:21:13.047361 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:13.064518 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:13.080271 1128964 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:13.080356 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:13.095583 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:13.110387 1128964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:13.250934 1128964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:13.450657 1128964 docker.go:233] disabling docker service ...
	I0318 14:21:13.450738 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:13.471701 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:13.488157 1128964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:13.644961 1128964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:13.811333 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:13.828584 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:13.852476 1128964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:13.852557 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.864849 1128964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:13.864951 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.877723 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.890337 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.902558 1128964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:13.915858 1128964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:13.928426 1128964 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:13.928526 1128964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:13.951761 1128964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
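The three commands above follow the usual bridge-netfilter sequence: probe the sysctl node, load br_netfilter if it is missing, then enable IPv4 forwarding. A rough Go equivalent is sketched below (run as root; the procfs paths are the standard kernel locations, nothing minikube-specific).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const brNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// If the sysctl node does not exist yet, the br_netfilter module is not loaded.
	if _, err := os.Stat(brNF); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v\n%s", err, out)
			os.Exit(1)
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enabling ip_forward:", err)
		os.Exit(1)
	}
}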
	I0318 14:21:13.964785 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:14.144432 1128964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:14.311928 1128964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:14.312078 1128964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:14.319279 1128964 start.go:562] Will wait 60s for crictl version
	I0318 14:21:14.319347 1128964 ssh_runner.go:195] Run: which crictl
	I0318 14:21:14.325325 1128964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:14.385244 1128964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:14.385344 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.426242 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.460725 1128964 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:21:10.753176 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:12.756558 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:13.760252 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:13.760295 1128788 pod_ready.go:81] duration metric: took 5.014671723s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:13.760315 1128788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
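The pod_ready waits recorded here poll each system-critical pod until its Ready condition reports True, bounded by the 4m0s timeout. A condensed client-go sketch of that wait loop could look like the following; the pod name, namespace, and 2-second poll interval are illustrative assumptions rather than minikube's exact values.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports Ready=True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Assumption: default kubeconfig; the pod name below is only an example.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitPodReady(cs, "kube-system", "etcd-embed-certs-767719", 4*time.Minute)
	fmt.Println("ready:", err == nil)
}

The same pattern repeats below for etcd, kube-apiserver, and the remaining control-plane pods.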
	I0318 14:21:12.780014 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .Start
	I0318 14:21:12.780429 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring networks are active...
	I0318 14:21:12.781303 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network default is active
	I0318 14:21:12.781644 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network mk-old-k8s-version-782728 is active
	I0318 14:21:12.782077 1129259 main.go:141] libmachine: (old-k8s-version-782728) Getting domain xml...
	I0318 14:21:12.782826 1129259 main.go:141] libmachine: (old-k8s-version-782728) Creating domain...
	I0318 14:21:14.142992 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting to get IP...
	I0318 14:21:14.144199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.144824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.144851 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.144681 1130456 retry.go:31] will retry after 192.354686ms: waiting for machine to come up
	I0318 14:21:14.339303 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.339861 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.339886 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.339806 1130456 retry.go:31] will retry after 389.480557ms: waiting for machine to come up
	I0318 14:21:14.731567 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.732127 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.732163 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.732075 1130456 retry.go:31] will retry after 435.139168ms: waiting for machine to come up
	I0318 14:21:15.168657 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.169170 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.169209 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.169147 1130456 retry.go:31] will retry after 398.075576ms: waiting for machine to come up
	I0318 14:21:15.569132 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.569651 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.569699 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.569627 1130456 retry.go:31] will retry after 716.720722ms: waiting for machine to come up
	I0318 14:21:14.461974 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:14.465116 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465652 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:14.465696 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465903 1128964 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:14.471039 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:14.486098 1128964 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:14.486307 1128964 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:21:14.486379 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:14.526373 1128964 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:21:14.526476 1128964 ssh_runner.go:195] Run: which lz4
	I0318 14:21:14.531145 1128964 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:21:14.536370 1128964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:14.536412 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 14:21:15.769517 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:17.772721 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:18.769552 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:18.769590 1128788 pod_ready.go:81] duration metric: took 5.009265127s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:18.769610 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:16.287569 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:16.288171 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:16.288208 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:16.288111 1130456 retry.go:31] will retry after 837.119291ms: waiting for machine to come up
	I0318 14:21:17.127197 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.127610 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.127641 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.127572 1130456 retry.go:31] will retry after 786.468871ms: waiting for machine to come up
	I0318 14:21:17.916280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.916885 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.916920 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.916827 1130456 retry.go:31] will retry after 1.219601482s: waiting for machine to come up
	I0318 14:21:19.137624 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:19.138092 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:19.138124 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:19.138038 1130456 retry.go:31] will retry after 1.236592895s: waiting for machine to come up
	I0318 14:21:20.376069 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:20.376549 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:20.376574 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:20.376518 1130456 retry.go:31] will retry after 2.101851485s: waiting for machine to come up
	I0318 14:21:16.505094 1128964 crio.go:444] duration metric: took 1.973996063s to copy over tarball
	I0318 14:21:16.505250 1128964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:19.251009 1128964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.745717226s)
	I0318 14:21:19.251045 1128964 crio.go:451] duration metric: took 2.745895394s to extract the tarball
	I0318 14:21:19.251053 1128964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:21:19.308392 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:19.363143 1128964 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:21:19.363172 1128964 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:21:19.363181 1128964 kubeadm.go:928] updating node { 192.168.83.39 8444 v1.28.4 crio true true} ...
	I0318 14:21:19.363313 1128964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-075922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:21:19.363415 1128964 ssh_runner.go:195] Run: crio config
	I0318 14:21:19.415995 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:19.416028 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:19.416048 1128964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:19.416085 1128964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-075922 NodeName:default-k8s-diff-port-075922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:21:19.416297 1128964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-075922"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
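For reference, the rendered config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new on the node a few lines below. A minimal way to inspect or compare it from the host, assuming the same profile name, is:

    out/minikube-linux-amd64 -p default-k8s-diff-port-075922 ssh \
      "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"

An empty diff is what lets the restart path conclude later in this log that the running cluster does not require reconfiguration.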
	I0318 14:21:19.416379 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:21:19.427340 1128964 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:19.427420 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:19.438470 1128964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0318 14:21:19.459945 1128964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:19.479728 1128964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0318 14:21:19.500079 1128964 ssh_runner.go:195] Run: grep 192.168.83.39	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:19.504746 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:19.519931 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:19.654822 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:19.675414 1128964 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922 for IP: 192.168.83.39
	I0318 14:21:19.675443 1128964 certs.go:194] generating shared ca certs ...
	I0318 14:21:19.675462 1128964 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:19.675647 1128964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:19.675707 1128964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:19.675722 1128964 certs.go:256] generating profile certs ...
	I0318 14:21:19.675861 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/client.key
	I0318 14:21:19.683399 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key.675162fd
	I0318 14:21:19.683522 1128964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key
	I0318 14:21:19.683667 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:19.683715 1128964 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:19.683730 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:19.683782 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:19.683870 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:19.683897 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:19.683940 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:19.684679 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:19.743065 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:19.787963 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:19.833491 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:19.865359 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 14:21:19.903294 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:19.932298 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:19.961860 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 14:21:19.992150 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:20.020750 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:20.047780 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:20.074566 1128964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:20.094524 1128964 ssh_runner.go:195] Run: openssl version
	I0318 14:21:20.101181 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:20.118970 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124628 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124707 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.133462 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:20.150447 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:20.165864 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173488 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173627 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.183147 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:20.200417 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:20.213973 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219407 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219488 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.226491 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
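The hash-named symlinks above follow the usual OpenSSL c_rehash convention: the link name is the certificate's subject hash plus a ".0" suffix. A sketch of the same step done by hand (illustrative only):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0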
	I0318 14:21:20.240299 1128964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:20.245960 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:20.253073 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:20.260144 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:20.267546 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:20.274740 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:20.282502 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
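The -checkend 86400 probes above ask whether each certificate will still be valid 24 hours from now; openssl exits non-zero if it will have expired by then. For example, with any of the certs listed above:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"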
	I0318 14:21:20.289722 1128964 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:20.289817 1128964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:20.289877 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.338941 1128964 cri.go:89] found id: ""
	I0318 14:21:20.339036 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:20.350677 1128964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:20.350706 1128964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:20.350718 1128964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:20.350775 1128964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:20.362216 1128964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:20.363622 1128964 kubeconfig.go:125] found "default-k8s-diff-port-075922" server: "https://192.168.83.39:8444"
	I0318 14:21:20.366606 1128964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:20.379417 1128964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.39
	I0318 14:21:20.379460 1128964 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:20.379481 1128964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:20.379556 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.423139 1128964 cri.go:89] found id: ""
	I0318 14:21:20.423224 1128964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:20.444111 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:20.456698 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:20.456725 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:20.456787 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:21:20.467432 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:20.467502 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:20.478894 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:21:20.490123 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:20.490216 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:20.501744 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.514020 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:20.514084 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.526805 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:21:20.538374 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:20.538452 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:20.550880 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:20.562302 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:20.687288 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.085960 1128788 pod_ready.go:102] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:21.781260 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.781287 1128788 pod_ready.go:81] duration metric: took 3.011668835s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.781297 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789501 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.789537 1128788 pod_ready.go:81] duration metric: took 8.231402ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789552 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797445 1128788 pod_ready.go:92] pod "kube-proxy-mtx9w" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.797483 1128788 pod_ready.go:81] duration metric: took 7.921289ms for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797496 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804084 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.804120 1128788 pod_ready.go:81] duration metric: took 6.613559ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804132 1128788 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:23.812751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
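These pod_ready polls are minikube's internal equivalent of waiting on the pod's Ready condition; the same check can be reproduced with kubectl, assuming the kubeconfig context matches the profile name used in this run:

    kubectl --context embed-certs-767719 -n kube-system \
      wait --for=condition=Ready pod/metrics-server-57f55c9bc5-jr9wp --timeout=4m0s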
	I0318 14:21:22.480055 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:22.480767 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:22.480805 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:22.480700 1130456 retry.go:31] will retry after 2.377253243s: waiting for machine to come up
	I0318 14:21:24.861000 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:24.861459 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:24.861513 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:24.861440 1130456 retry.go:31] will retry after 2.768860765s: waiting for machine to come up
	I0318 14:21:21.432193 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.821781 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.899411 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
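Together with the certs and kubeconfig phases run a few lines earlier, the restart path replays individual kubeadm init phases rather than a full kubeadm init. A condensed sketch of the order used here, assuming the same binary path and config file:

    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
    # "init phase addon all" follows once the apiserver reports healthy (see below)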
	I0318 14:21:21.984494 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:21.984624 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.484985 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.985119 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:23.009700 1128964 api_server.go:72] duration metric: took 1.025195346s to wait for apiserver process to appear ...
	I0318 14:21:23.009739 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:23.009764 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:23.010328 1128964 api_server.go:269] stopped: https://192.168.83.39:8444/healthz: Get "https://192.168.83.39:8444/healthz": dial tcp 192.168.83.39:8444: connect: connection refused
	I0318 14:21:23.510606 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.307173 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.307217 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.307238 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.345507 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.345551 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.510350 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.515684 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:26.515721 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.010509 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.015492 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:27.015526 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.510772 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.520209 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:21:27.527945 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:27.527978 1128964 api_server.go:131] duration metric: took 4.518232257s to wait for apiserver health ...
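The 403 → 500 → 200 progression above is the normal startup sequence: anonymous requests are typically rejected until the RBAC bootstrap roles exist, /healthz then returns 500 while the remaining post-start hooks finish, and finally 200. The same endpoint can be probed by hand, assuming the kubeconfig context matches the profile name:

    # anonymous probe (expect 403 until the RBAC bootstrap roles are in place)
    curl -k https://192.168.83.39:8444/healthz
    # authenticated probe, with the same per-check breakdown shown in the log
    kubectl --context default-k8s-diff-port-075922 get --raw '/healthz?verbose'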
	I0318 14:21:27.527988 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:27.527994 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:27.529779 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:21:26.313296 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:28.811916 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:27.633200 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:27.633774 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:27.633824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:27.633712 1130456 retry.go:31] will retry after 2.743873993s: waiting for machine to come up
	I0318 14:21:30.380835 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:30.381280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:30.381314 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:30.381213 1130456 retry.go:31] will retry after 4.377164627s: waiting for machine to come up
	I0318 14:21:27.531259 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:27.573198 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
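The 457-byte conflist written here is minikube's bridge CNI configuration for the pod CIDR chosen above (10.244.0.0/16). Its exact contents are not printed in the log; a representative bridge conflist has roughly this shape (illustrative only, not the file minikube templates):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }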
	I0318 14:21:27.619813 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:27.629766 1128964 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:27.629805 1128964 system_pods.go:61] "coredns-5dd5756b68-dsrcd" [86ac331d-2539-4fbb-8cf8-56f58afa6f99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:27.629815 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [0de3bd3b-6ee2-46e2-83f7-7c637115879f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:27.629821 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [e1e689c8-642c-428e-bddf-43c2c1524563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:27.629832 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [1a200d0f-53e6-4e44-a8b0-28b9d21f763e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:27.629837 1128964 system_pods.go:61] "kube-proxy-wbnvd" [6bf13050-a150-4133-93e2-71ddcad443ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:27.629842 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [87bc17b3-75c6-4d6b-9b8f-29823398100a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:27.629847 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-4vrvb" [d12dc531-720c-4a7a-93af-69b9005666fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:27.629852 1128964 system_pods.go:61] "storage-provisioner" [856896cd-daec-4873-8f9c-c7cadeb3c16e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:27.629857 1128964 system_pods.go:74] duration metric: took 10.000416ms to wait for pod list to return data ...
	I0318 14:21:27.629866 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:27.634112 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:27.634147 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:27.634159 1128964 node_conditions.go:105] duration metric: took 4.287491ms to run NodePressure ...
	I0318 14:21:27.634190 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:27.976277 1128964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980894 1128964 kubeadm.go:733] kubelet initialised
	I0318 14:21:27.980920 1128964 kubeadm.go:734] duration metric: took 4.609836ms waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980932 1128964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:27.986151 1128964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:29.993963 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:31.313401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:33.811753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.760820 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Found IP for machine: 192.168.50.229
	I0318 14:21:34.761353 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has current primary IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761362 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserving static IP address...
	I0318 14:21:34.761782 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.761820 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserved static IP address: 192.168.50.229
	I0318 14:21:34.761845 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | skip adding static IP to network mk-old-k8s-version-782728 - found existing host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"}
	I0318 14:21:34.761864 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Getting to WaitForSSH function...
	I0318 14:21:34.761881 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting for SSH to be available...
	I0318 14:21:34.764073 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764333 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.764360 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764532 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH client type: external
	I0318 14:21:34.764572 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa (-rw-------)
	I0318 14:21:34.764613 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:34.764631 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | About to run SSH command:
	I0318 14:21:34.764647 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | exit 0
	I0318 14:21:34.896449 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:34.896855 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetConfigRaw
	I0318 14:21:34.897582 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:34.899986 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900376 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.900416 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900800 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:21:34.901117 1129259 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:34.901147 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:34.901437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:34.904052 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904424 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.904452 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904606 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:34.904785 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.904945 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.905107 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:34.905279 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:34.905513 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:34.905531 1129259 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:35.016717 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:35.016763 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017067 1129259 buildroot.go:166] provisioning hostname "old-k8s-version-782728"
	I0318 14:21:35.017099 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017382 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.020497 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.020890 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.020924 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.021057 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.021277 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021590 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.021849 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.022055 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.022070 1129259 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-782728 && echo "old-k8s-version-782728" | sudo tee /etc/hostname
	I0318 14:21:35.147357 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-782728
	
	I0318 14:21:35.147390 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.150191 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150607 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.150636 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150853 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.151114 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151347 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151546 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.151781 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.152045 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.152072 1129259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-782728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-782728/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-782728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:35.275206 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:35.275240 1129259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:35.275285 1129259 buildroot.go:174] setting up certificates
	I0318 14:21:35.275295 1129259 provision.go:84] configureAuth start
	I0318 14:21:35.275306 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.275669 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:35.278614 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279090 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.279130 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279354 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.282199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282559 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.282595 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282756 1129259 provision.go:143] copyHostCerts
	I0318 14:21:35.282849 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:35.282867 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:35.282929 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:35.283102 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:35.283114 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:35.283139 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:35.283203 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:35.283210 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:35.283227 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:35.283275 1129259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-782728 san=[127.0.0.1 192.168.50.229 localhost minikube old-k8s-version-782728]
	I0318 14:21:35.515186 1129259 provision.go:177] copyRemoteCerts
	I0318 14:21:35.515266 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:35.515318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.517932 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518244 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.518297 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518441 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.518653 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.518795 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.518970 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:35.607609 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:35.636141 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 14:21:35.664489 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
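The server certificate generated at provision.go:117 above carries the SANs listed there (127.0.0.1, 192.168.50.229, localhost, minikube, old-k8s-version-782728); once the files have been copied over, the SAN list can be confirmed on the machine with:

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'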
	I0318 14:21:35.692201 1129259 provision.go:87] duration metric: took 416.891642ms to configureAuth
	I0318 14:21:35.692259 1129259 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:35.692491 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:21:35.692585 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.695742 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696122 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.696159 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696325 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.696561 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696767 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696934 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.697111 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.697355 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.697384 1129259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:35.994320 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:35.994352 1129259 machine.go:97] duration metric: took 1.093217385s to provisionDockerMachine
	I0318 14:21:35.994367 1129259 start.go:293] postStartSetup for "old-k8s-version-782728" (driver="kvm2")
	I0318 14:21:35.994383 1129259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:35.994415 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:35.994757 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:35.994799 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.997438 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.997814 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.997850 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.998044 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.998241 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.998437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.998571 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.089357 1129259 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:36.094372 1129259 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:36.094407 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:36.094499 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:36.094617 1129259 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:36.094714 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:36.106796 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:36.135520 1129259 start.go:296] duration metric: took 141.136354ms for postStartSetup
	I0318 14:21:36.135573 1129259 fix.go:56] duration metric: took 23.385978091s for fixHost
	I0318 14:21:36.135607 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.139108 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139458 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.139491 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139689 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.139978 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140226 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140353 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.140528 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:36.140755 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:36.140771 1129259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 14:21:36.252999 1128583 start.go:364] duration metric: took 57.905644198s to acquireMachinesLock for "no-preload-188109"
	I0318 14:21:36.253054 1128583 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:36.253063 1128583 fix.go:54] fixHost starting: 
	I0318 14:21:36.253510 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:36.253545 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:36.271856 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0318 14:21:36.272254 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:36.272790 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:21:36.272822 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:36.273237 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:36.273446 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:36.273614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:21:36.275414 1128583 fix.go:112] recreateIfNeeded on no-preload-188109: state=Stopped err=<nil>
	I0318 14:21:36.275440 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	W0318 14:21:36.275623 1128583 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:36.277528 1128583 out.go:177] * Restarting existing kvm2 VM for "no-preload-188109" ...
	I0318 14:21:31.995770 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.495078 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.252848 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771696.238093940
	
	I0318 14:21:36.252877 1129259 fix.go:216] guest clock: 1710771696.238093940
	I0318 14:21:36.252884 1129259 fix.go:229] Guest: 2024-03-18 14:21:36.23809394 +0000 UTC Remote: 2024-03-18 14:21:36.13557956 +0000 UTC m=+255.035410784 (delta=102.51438ms)
	I0318 14:21:36.252906 1129259 fix.go:200] guest clock delta is within tolerance: 102.51438ms
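The fix.go lines above read the guest clock with "date +%s.%N" and compare it to the host's wall clock, resyncing only when the delta exceeds a tolerance. A rough sketch of that comparison is below; the parsing helper, the one-second tolerance, and the sample input are assumptions, not minikube's code.

// Rough sketch of the guest-clock tolerance check logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseDateSN parses the output of "date +%s.%N" (seconds.nanoseconds).
func parseDateSN(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 { // %N always prints nine digits, so this is nanoseconds
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseDateSN("1710771696.238093940") // value captured in the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the clock\n", delta)
	}
}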
	I0318 14:21:36.252911 1129259 start.go:83] releasing machines lock for "old-k8s-version-782728", held for 23.503358875s
	I0318 14:21:36.252936 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.253200 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:36.256277 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256711 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.256741 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256901 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257487 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257702 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257827 1129259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:36.257887 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.258009 1129259 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:36.258034 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.260840 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261336 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261358 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261456 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.261692 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.261789 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261818 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261892 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.261982 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.262127 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.262173 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.262300 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.262429 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.345131 1129259 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:36.371649 1129259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:36.524261 1129259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:36.533020 1129259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:36.533151 1129259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:36.551817 1129259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:36.551860 1129259 start.go:494] detecting cgroup driver to use...
	I0318 14:21:36.551933 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:36.575948 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:36.596748 1129259 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:36.596820 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:36.614156 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:36.630681 1129259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:36.753374 1129259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:36.944402 1129259 docker.go:233] disabling docker service ...
	I0318 14:21:36.944496 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:36.966727 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:36.987565 1129259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:37.121256 1129259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:37.264652 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:37.281737 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:37.306307 1129259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 14:21:37.306374 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.318728 1129259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:37.318818 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.330587 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.343063 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.356170 1129259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:37.369932 1129259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:37.380417 1129259 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:37.380487 1129259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:37.397409 1129259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:37.414745 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:37.571427 1129259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:37.747275 1129259 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:37.747357 1129259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:37.752838 1129259 start.go:562] Will wait 60s for crictl version
	I0318 14:21:37.752922 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:37.758286 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:37.799301 1129259 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:37.799400 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.838257 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.889692 1129259 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
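The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf through sed over SSH: pin the pause image to registry.k8s.io/pause:3.2, set cgroup_manager to "cgroupfs", and re-add conmon_cgroup = "pod". A local Go sketch of the same line edits (an illustration, not the code minikube runs) follows.

// Local sketch of the 02-crio.conf edits driven via sed above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)

	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// sed -i '/conmon_cgroup = .*/d' followed by
	// sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}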
	I0318 14:21:35.812465 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:37.820263 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.313683 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.278973 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Start
	I0318 14:21:36.279160 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring networks are active...
	I0318 14:21:36.280043 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network default is active
	I0318 14:21:36.280495 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network mk-no-preload-188109 is active
	I0318 14:21:36.281014 1128583 main.go:141] libmachine: (no-preload-188109) Getting domain xml...
	I0318 14:21:36.281995 1128583 main.go:141] libmachine: (no-preload-188109) Creating domain...
	I0318 14:21:37.644409 1128583 main.go:141] libmachine: (no-preload-188109) Waiting to get IP...
	I0318 14:21:37.645406 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.645958 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.646047 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.645922 1130597 retry.go:31] will retry after 223.965782ms: waiting for machine to come up
	I0318 14:21:37.871397 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.871933 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.871971 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.871882 1130597 retry.go:31] will retry after 272.743353ms: waiting for machine to come up
	I0318 14:21:38.146680 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.147278 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.147309 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.147211 1130597 retry.go:31] will retry after 414.468616ms: waiting for machine to come up
	I0318 14:21:38.563199 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.563768 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.563794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.563718 1130597 retry.go:31] will retry after 582.588791ms: waiting for machine to come up
	I0318 14:21:39.147611 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.148410 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.148436 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.148315 1130597 retry.go:31] will retry after 686.425224ms: waiting for machine to come up
	I0318 14:21:39.836964 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.837647 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.837677 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.837593 1130597 retry.go:31] will retry after 878.564369ms: waiting for machine to come up
	I0318 14:21:40.717644 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:40.718346 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:40.718380 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:40.718276 1130597 retry.go:31] will retry after 1.183201382s: waiting for machine to come up
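The retry.go DBG lines above poll for the restarted VM's IP address with growing delays. A minimal sketch of such a loop is below; the exact backoff policy (doubling with jitter), the timeout, and the helper name are assumptions rather than minikube's implementation.

// Minimal sketch of the "will retry after ..." polling seen above: retry a
// check with growing, jittered delays until it succeeds or time runs out.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2 // assumed growth factor
	}
}

func main() {
	_ = retryWithBackoff(3*time.Second, func() error {
		return errors.New("waiting for machine to come up")
	})
}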
	I0318 14:21:37.891038 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:37.894295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.894865 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:37.894896 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.895237 1129259 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:37.899967 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
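The bash one-liner above makes the host.minikube.internal entry idempotent: drop any existing line ending in that hostname, append the fresh mapping, and copy the result back over /etc/hosts. The same idea in a small Go sketch; the helper name and the direct file write are assumptions for illustration.

// Sketch of the idempotent /etc/hosts update run above.
package main

import (
	"os"
	"strings"
)

func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry, like the grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}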
	I0318 14:21:37.916249 1129259 kubeadm.go:877] updating cluster {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:37.916384 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:21:37.916449 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:37.974406 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:37.974492 1129259 ssh_runner.go:195] Run: which lz4
	I0318 14:21:37.979374 1129259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:21:37.984355 1129259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:37.984400 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 14:21:39.978421 1129259 crio.go:444] duration metric: took 1.99908094s to copy over tarball
	I0318 14:21:39.978524 1129259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:36.995480 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:39.005382 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.495300 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.495345 1128964 pod_ready.go:81] duration metric: took 12.509166884s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.495358 1128964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504432 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.504467 1128964 pod_ready.go:81] duration metric: took 9.100778ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504480 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515466 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.515506 1128964 pod_ready.go:81] duration metric: took 11.017212ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515519 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525891 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.525929 1128964 pod_ready.go:81] duration metric: took 10.399892ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525943 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534161 1128964 pod_ready.go:92] pod "kube-proxy-wbnvd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.534196 1128964 pod_ready.go:81] duration metric: took 8.245545ms for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534208 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:42.314504 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:44.812532 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:41.902972 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:41.903707 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:41.903736 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:41.903670 1130597 retry.go:31] will retry after 1.282612289s: waiting for machine to come up
	I0318 14:21:43.188745 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:43.189303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:43.189332 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:43.189257 1130597 retry.go:31] will retry after 1.175485401s: waiting for machine to come up
	I0318 14:21:44.366602 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:44.367162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:44.367191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:44.367121 1130597 retry.go:31] will retry after 1.700678954s: waiting for machine to come up
	I0318 14:21:43.321091 1129259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342462355s)
	I0318 14:21:43.321144 1129259 crio.go:451] duration metric: took 3.342687518s to extract the tarball
	I0318 14:21:43.321155 1129259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:21:43.365776 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:43.433785 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
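The preload check at crio.go:492 above runs "sudo crictl images --output json" and looks for the expected image tags before falling back to cached images. A sketch of that lookup follows; the JSON field names follow crictl's output format but should be treated as an assumption here.

// Sketch of the "are the preloaded images present?" check logged above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether crictl knows about the given image tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("assuming images are not preloaded")
	}
}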
	I0318 14:21:43.433824 1129259 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:43.433900 1129259 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.434017 1129259 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.434032 1129259 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 14:21:43.434046 1129259 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.434053 1129259 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.434305 1129259 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436059 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.436080 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.436108 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.436157 1129259 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.436171 1129259 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436220 1129259 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 14:21:43.436239 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.436852 1129259 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.592274 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.597491 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.602837 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.613030 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.613827 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.626606 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.643937 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 14:21:43.712054 1129259 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 14:21:43.712144 1129259 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.712203 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.745459 1129259 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 14:21:43.745524 1129259 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.745578 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.804000 1129259 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 14:21:43.804069 1129259 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.804132 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.818890 1129259 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 14:21:43.818946 1129259 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.818948 1129259 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 14:21:43.818984 1129259 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.818996 1129259 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 14:21:43.819000 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819013 1129259 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.819034 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819043 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819047 1129259 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 14:21:43.819079 1129259 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 14:21:43.819111 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819145 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.819113 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.819191 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.900808 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 14:21:43.900881 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 14:21:43.900956 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 14:21:43.900960 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.901030 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 14:21:43.901092 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.901124 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.979791 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 14:21:43.999132 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 14:21:44.055513 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:44.211993 1129259 cache_images.go:92] duration metric: took 778.138355ms to LoadCachedImages
	W0318 14:21:44.212165 1129259 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0318 14:21:44.212193 1129259 kubeadm.go:928] updating node { 192.168.50.229 8443 v1.20.0 crio true true} ...
	I0318 14:21:44.212368 1129259 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-782728 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:21:44.212495 1129259 ssh_runner.go:195] Run: crio config
	I0318 14:21:44.269727 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:21:44.269766 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:44.269785 1129259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:44.269814 1129259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-782728 NodeName:old-k8s-version-782728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 14:21:44.270015 1129259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-782728"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:44.270105 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 14:21:44.282940 1129259 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:44.283039 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:44.295320 1129259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 14:21:44.315686 1129259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:44.335233 1129259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 14:21:44.357698 1129259 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:44.362264 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:44.377101 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:44.528190 1129259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:44.549708 1129259 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728 for IP: 192.168.50.229
	I0318 14:21:44.549735 1129259 certs.go:194] generating shared ca certs ...
	I0318 14:21:44.549763 1129259 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:44.549989 1129259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:44.550058 1129259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:44.550074 1129259 certs.go:256] generating profile certs ...
	I0318 14:21:44.550213 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.key
	I0318 14:21:44.550297 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612
	I0318 14:21:44.550356 1129259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key
	I0318 14:21:44.550551 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:44.550592 1129259 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:44.550606 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:44.550645 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:44.550677 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:44.550723 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:44.550778 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:44.551493 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:44.612076 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:44.644841 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:44.677687 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:44.719459 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 14:21:44.767865 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 14:21:44.816764 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:44.860167 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:44.891216 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:44.927632 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:44.965589 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:45.002269 1129259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:45.025347 1129259 ssh_runner.go:195] Run: openssl version
	I0318 14:21:45.032361 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:45.046783 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052835 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052942 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.060025 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:45.073939 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:45.087380 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092866 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092945 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.099328 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:45.112233 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:45.126449 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132566 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132667 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.139307 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
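Each CA certificate above is linked into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL-based clients locate trust anchors. A small Go sketch that shells out to openssl for the hash and recreates the link, mirroring the "-hash" plus "ln -fs" pair above; the helper name is illustrative.

// Sketch: create /etc/ssl/certs/<subject-hash>.0 pointing at a certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(certPath string) error {
	// Same hash openssl x509 -hash -noout -in <cert> prints.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/10752082.pem"); err != nil {
		panic(err)
	}
}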
	I0318 14:21:45.153117 1129259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:45.158588 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:45.166096 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:45.173537 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:45.181337 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:45.189126 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:45.197163 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
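The "openssl x509 -noout -checkend 86400" calls above verify that each control-plane certificate remains valid for at least another day before the restart proceeds. An equivalent check with crypto/x509 is sketched below; the file path is one of the certs named in the log and the helper name is illustrative.

// Equivalent of "openssl x509 -noout -in <cert> -checkend 86400": report
// whether the certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate will expire within 24h; regeneration needed")
	}
}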
	I0318 14:21:45.206171 1129259 kubeadm.go:391] StartCluster: {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:45.206295 1129259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:45.206370 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.247013 1129259 cri.go:89] found id: ""
	I0318 14:21:45.247119 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:45.261917 1129259 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:45.261947 1129259 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:45.261955 1129259 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:45.262015 1129259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:45.276154 1129259 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:45.277263 1129259 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:21:45.277937 1129259 kubeconfig.go:62] /home/jenkins/minikube-integration/18427-1067917/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-782728" cluster setting kubeconfig missing "old-k8s-version-782728" context setting]
	I0318 14:21:45.278862 1129259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:45.280825 1129259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:45.295159 1129259 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.229
	I0318 14:21:45.295211 1129259 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:45.295255 1129259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:45.295321 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.343156 1129259 cri.go:89] found id: ""
	I0318 14:21:45.343242 1129259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:45.361812 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:45.376218 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:45.376250 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:45.376314 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:45.386913 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:45.387056 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:45.398244 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:45.409397 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:45.409476 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:45.421057 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.432124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:45.432193 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.443793 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:45.454348 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:45.454463 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:45.465286 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:45.477199 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:45.613588 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:41.690971 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:41.691009 1128964 pod_ready.go:81] duration metric: took 1.156786821s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:41.691020 1128964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:44.189110 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.201644 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.813954 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:48.817402 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.069196 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:46.069747 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:46.069797 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:46.069687 1130597 retry.go:31] will retry after 2.354521412s: waiting for machine to come up
	I0318 14:21:48.425714 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:48.426186 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:48.426219 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:48.426147 1130597 retry.go:31] will retry after 2.74319235s: waiting for machine to come up
	I0318 14:21:46.567767 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.838421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.993039 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:47.096766 1129259 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:47.096883 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:47.596963 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.097569 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.597879 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.097195 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.597924 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.097885 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.597926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:51.096984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.699275 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:50.699690 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.311999 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:53.811066 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.173264 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.173844 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:51.173880 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:51.173784 1130597 retry.go:31] will retry after 4.489599719s: waiting for machine to come up
	I0318 14:21:55.665080 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665639 1128583 main.go:141] libmachine: (no-preload-188109) Found IP for machine: 192.168.61.40
	I0318 14:21:55.665675 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has current primary IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665686 1128583 main.go:141] libmachine: (no-preload-188109) Reserving static IP address...
	I0318 14:21:55.666111 1128583 main.go:141] libmachine: (no-preload-188109) Reserved static IP address: 192.168.61.40
	I0318 14:21:55.666149 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.666164 1128583 main.go:141] libmachine: (no-preload-188109) Waiting for SSH to be available...
	I0318 14:21:55.666191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | skip adding static IP to network mk-no-preload-188109 - found existing host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"}
	I0318 14:21:55.666205 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Getting to WaitForSSH function...
	I0318 14:21:55.668473 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668792 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.668837 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668947 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH client type: external
	I0318 14:21:55.668989 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa (-rw-------)
	I0318 14:21:55.669020 1128583 main.go:141] libmachine: (no-preload-188109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:55.669043 1128583 main.go:141] libmachine: (no-preload-188109) DBG | About to run SSH command:
	I0318 14:21:55.669095 1128583 main.go:141] libmachine: (no-preload-188109) DBG | exit 0
	I0318 14:21:55.796228 1128583 main.go:141] libmachine: (no-preload-188109) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:55.796668 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetConfigRaw
	I0318 14:21:55.797378 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:55.800241 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.800716 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.800771 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.801150 1128583 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/config.json ...
	I0318 14:21:55.801416 1128583 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:55.801441 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:55.801690 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.804667 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.597867 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.097894 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.597872 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.096949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.597262 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.097637 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.597078 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.097246 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.597940 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:56.097312 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.700698 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.198658 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.805029 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.805269 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.806759 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.806983 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807220 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807421 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.807623 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.807952 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.807982 1128583 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:55.920939 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:55.920993 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921259 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:21:55.921292 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921510 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.924430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.924921 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.924962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.925153 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.925431 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925792 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.926029 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.926301 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.926320 1128583 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-188109 && echo "no-preload-188109" | sudo tee /etc/hostname
	I0318 14:21:56.051873 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-188109
	
	I0318 14:21:56.051915 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.055015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055387 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.055422 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055659 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.055887 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056058 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056190 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.056318 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.056508 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.056525 1128583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-188109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-188109/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-188109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:56.178366 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:56.178401 1128583 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:56.178443 1128583 buildroot.go:174] setting up certificates
	I0318 14:21:56.178454 1128583 provision.go:84] configureAuth start
	I0318 14:21:56.178465 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:56.178859 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:56.181995 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.182457 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182724 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.185337 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185623 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.185649 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185880 1128583 provision.go:143] copyHostCerts
	I0318 14:21:56.185968 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:56.185983 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:56.186073 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:56.186249 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:56.186264 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:56.186296 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:56.186392 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:56.186406 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:56.186432 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:56.186511 1128583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.no-preload-188109 san=[127.0.0.1 192.168.61.40 localhost minikube no-preload-188109]
	I0318 14:21:56.332196 1128583 provision.go:177] copyRemoteCerts
	I0318 14:21:56.332267 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:56.332295 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.335310 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335604 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.335639 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335787 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.336002 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.336170 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.336310 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.427529 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:56.459132 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:21:56.488690 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:56.516043 1128583 provision.go:87] duration metric: took 337.568576ms to configureAuth
	I0318 14:21:56.516088 1128583 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:56.516309 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:21:56.516457 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.519576 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.519998 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.520059 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.520237 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.520460 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520677 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520876 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.521065 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.521290 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.521307 1128583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:56.831034 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:56.831076 1128583 machine.go:97] duration metric: took 1.029643209s to provisionDockerMachine
	I0318 14:21:56.831092 1128583 start.go:293] postStartSetup for "no-preload-188109" (driver="kvm2")
	I0318 14:21:56.831107 1128583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:56.831126 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:56.831549 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:56.831611 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.834520 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.834962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.834992 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.835234 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.835415 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.835582 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.835743 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.927694 1128583 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:56.932973 1128583 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:56.933002 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:56.933088 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:56.933200 1128583 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:56.933345 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:56.943594 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:56.971483 1128583 start.go:296] duration metric: took 140.368525ms for postStartSetup
	I0318 14:21:56.971564 1128583 fix.go:56] duration metric: took 20.718501273s for fixHost
	I0318 14:21:56.971618 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.974721 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975185 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.975250 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975409 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.975679 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.975885 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.976049 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.976242 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.976438 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.976453 1128583 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:57.089795 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771717.066528661
	
	I0318 14:21:57.089823 1128583 fix.go:216] guest clock: 1710771717.066528661
	I0318 14:21:57.089834 1128583 fix.go:229] Guest: 2024-03-18 14:21:57.066528661 +0000 UTC Remote: 2024-03-18 14:21:56.971568576 +0000 UTC m=+361.214853207 (delta=94.960085ms)
	I0318 14:21:57.089865 1128583 fix.go:200] guest clock delta is within tolerance: 94.960085ms
	I0318 14:21:57.089873 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 20.836840869s
	I0318 14:21:57.089898 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.090297 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:57.094015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094517 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.094563 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094920 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095607 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095844 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095978 1128583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:57.096034 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.096182 1128583 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:57.096221 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.099303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099329 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099754 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099854 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099869 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.100103 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100118 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100339 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100568 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100578 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100766 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.100781 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.203060 1128583 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:57.209943 1128583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:57.368686 1128583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:57.376289 1128583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:57.376375 1128583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:57.394365 1128583 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:57.394405 1128583 start.go:494] detecting cgroup driver to use...
	I0318 14:21:57.394488 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:57.412172 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:57.428895 1128583 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:57.428988 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:57.445064 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:57.461255 1128583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:57.596381 1128583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:57.774782 1128583 docker.go:233] disabling docker service ...
	I0318 14:21:57.774890 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:57.791820 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:57.807412 1128583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:57.961890 1128583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:58.118122 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:58.133994 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:58.155336 1128583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:58.155429 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.167537 1128583 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:58.167642 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.180814 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.193997 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.206817 1128583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:58.220843 1128583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:58.232012 1128583 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:58.232073 1128583 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:58.246610 1128583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:58.260393 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:58.416723 1128583 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:58.588776 1128583 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:58.588864 1128583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:58.594689 1128583 start.go:562] Will wait 60s for crictl version
	I0318 14:21:58.594787 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:58.599287 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:58.634954 1128583 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:58.635059 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.667031 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.703316 1128583 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 14:21:55.812079 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:57.813027 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.310988 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:58.704763 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:58.708030 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708495 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:58.708527 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708738 1128583 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:58.713408 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:58.726934 1128583 kubeadm.go:877] updating cluster {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:58.727067 1128583 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:21:58.727105 1128583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:58.764875 1128583 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 14:21:58.764904 1128583 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:58.764976 1128583 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.765019 1128583 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.765091 1128583 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.765117 1128583 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.765142 1128583 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.765158 1128583 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.765125 1128583 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.765098 1128583 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766495 1128583 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766589 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.766592 1128583 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.766768 1128583 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.766924 1128583 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.766492 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.919274 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 14:21:58.934955 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.945887 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.954907 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.961334 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.976485 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.991515 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.100572 1128583 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 14:21:59.100624 1128583 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.100684 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.125681 1128583 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 14:21:59.125740 1128583 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.125799 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.138461 1128583 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 14:21:59.138521 1128583 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.138579 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149655 1128583 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 14:21:59.149697 1128583 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.149763 1128583 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149803 1128583 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.149831 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.149839 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149790 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149875 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.231815 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.231851 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 14:21:59.231959 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:21:59.232052 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.232060 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.232064 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.232148 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.317997 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 14:21:59.318029 1128583 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318083 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318116 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318158 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318213 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318240 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.318246 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 14:21:59.318252 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318281 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 14:21:59.318315 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.364549 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:56.597953 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.098324 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.598002 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.097907 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.597192 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.097990 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.597523 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.097862 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:01.097925 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.703771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.200048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:02.313802 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.812944 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:03.246360 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.928017963s)
	I0318 14:22:03.246414 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246364 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.928251379s)
	I0318 14:22:03.246429 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 14:22:03.246439 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.92820974s)
	I0318 14:22:03.246454 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246468 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246415 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.928141711s)
	I0318 14:22:03.246512 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246515 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246516 1128583 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.88192635s)
	I0318 14:22:03.246587 1128583 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 14:22:03.246641 1128583 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:03.246704 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:22:01.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.097198 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.597105 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.097996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.597914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.097805 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.597949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.097415 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.597222 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:06.096954 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.203222 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.699887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.813730 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.311491 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.317600 1128583 ssh_runner.go:235] Completed: which crictl: (3.070863461s)
	I0318 14:22:06.317700 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:06.317775 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.071235517s)
	I0318 14:22:06.317805 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 14:22:06.317837 1128583 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.317907 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.370328 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 14:22:06.370435 1128583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.243401402s)
	I0318 14:22:08.613903 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.295918452s)
	I0318 14:22:08.613917 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 14:22:08.613941 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:08.613994 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:06.597785 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.097171 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.597738 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.097476 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.596984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.097503 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.597464 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.096998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.597822 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.097597 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.199978 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.200394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.312752 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:13.812826 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.076840 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462814214s)
	I0318 14:22:11.076881 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 14:22:11.076917 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:11.076968 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:13.332851 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.25584847s)
	I0318 14:22:13.332896 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 14:22:13.332932 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:13.333002 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:14.705785 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.372744893s)
	I0318 14:22:14.705843 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 14:22:14.705881 1128583 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:14.705945 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:15.467380 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 14:22:15.467432 1128583 cache_images.go:123] Successfully loaded all cached images
	I0318 14:22:15.467439 1128583 cache_images.go:92] duration metric: took 16.702522125s to LoadCachedImages
	I0318 14:22:15.467456 1128583 kubeadm.go:928] updating node { 192.168.61.40 8443 v1.29.0-rc.2 crio true true} ...
	I0318 14:22:15.467619 1128583 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-188109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:22:15.467790 1128583 ssh_runner.go:195] Run: crio config
	I0318 14:22:15.520678 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:15.520705 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:15.520718 1128583 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:22:15.520740 1128583 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.40 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-188109 NodeName:no-preload-188109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:22:15.520893 1128583 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.40
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-188109"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.40
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.40"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:22:15.520965 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 14:22:15.534187 1128583 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:22:15.534260 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:22:15.546509 1128583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 14:22:15.567029 1128583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 14:22:15.586866 1128583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 14:22:15.609161 1128583 ssh_runner.go:195] Run: grep 192.168.61.40	control-plane.minikube.internal$ /etc/hosts
	I0318 14:22:15.614800 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.40	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:22:15.630088 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:22:15.754729 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:22:15.774062 1128583 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109 for IP: 192.168.61.40
	I0318 14:22:15.774093 1128583 certs.go:194] generating shared ca certs ...
	I0318 14:22:15.774114 1128583 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:22:15.774374 1128583 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:22:15.774434 1128583 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:22:15.774448 1128583 certs.go:256] generating profile certs ...
	I0318 14:22:15.774537 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/client.key
	I0318 14:22:15.774607 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key.8d4024a9
	I0318 14:22:15.774652 1128583 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key
	I0318 14:22:15.774833 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:22:15.774871 1128583 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:22:15.774882 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:22:15.774926 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:22:15.774972 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:22:15.775031 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:22:15.775106 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:22:15.775902 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:22:11.597959 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.097914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.597046 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.097863 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.597617 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.097268 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.597088 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.097142 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.597902 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:16.098091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.698561 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:14.199200 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.200026 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.312392 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:18.812463 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:15.821418 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:22:15.874044 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:22:15.910814 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:22:15.965889 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 14:22:16.001003 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:22:16.030033 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:22:16.060519 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:22:16.089952 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:22:16.119397 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:22:16.150036 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:22:16.179489 1128583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:22:16.201823 1128583 ssh_runner.go:195] Run: openssl version
	I0318 14:22:16.208496 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:22:16.222723 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228161 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228239 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.234994 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:22:16.248672 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:22:16.262626 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268255 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268361 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.274868 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:22:16.287251 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:22:16.299690 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304633 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304718 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.311230 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:22:16.325483 1128583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:22:16.331012 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:22:16.338731 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:22:16.346289 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:22:16.353403 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:22:16.359967 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:22:16.367151 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:22:16.373719 1128583 kubeadm.go:391] StartCluster: {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:22:16.373823 1128583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:22:16.373921 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.417874 1128583 cri.go:89] found id: ""
	I0318 14:22:16.417957 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:22:16.431026 1128583 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:22:16.431057 1128583 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:22:16.431065 1128583 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:22:16.431125 1128583 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:22:16.445445 1128583 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:22:16.446576 1128583 kubeconfig.go:125] found "no-preload-188109" server: "https://192.168.61.40:8443"
	I0318 14:22:16.449104 1128583 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:22:16.461001 1128583 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.40
	I0318 14:22:16.461042 1128583 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:22:16.461056 1128583 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:22:16.461104 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.502356 1128583 cri.go:89] found id: ""
	I0318 14:22:16.502437 1128583 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:22:16.525636 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:22:16.538600 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:22:16.538626 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:22:16.538677 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:22:16.550720 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:22:16.550803 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:22:16.562585 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:22:16.573439 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:22:16.573502 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:22:16.585548 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.596619 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:22:16.596706 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.608458 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:22:16.619498 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:22:16.619587 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:22:16.631359 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:22:16.643420 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:16.765437 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:17.862932 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.097434993s)
	I0318 14:22:17.862980 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.097197 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.168390 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.295118 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:22:18.295225 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.795897 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.295431 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.335088 1128583 api_server.go:72] duration metric: took 1.039967082s to wait for apiserver process to appear ...
	I0318 14:22:19.335128 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:22:19.335163 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:19.335912 1128583 api_server.go:269] stopped: https://192.168.61.40:8443/healthz: Get "https://192.168.61.40:8443/healthz": dial tcp 192.168.61.40:8443: connect: connection refused
	I0318 14:22:19.836266 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:16.597253 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.097759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.597764 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.097196 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.597181 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.097798 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.598008 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.097899 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.597717 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:21.097339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.699537 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:21.199910 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:22.338349 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.338383 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.338402 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.351154 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.351190 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.835446 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.841044 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:22.841092 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.335665 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.347092 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.347126 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.835731 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.840517 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.840559 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:24.336151 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:24.340981 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:22:24.354524 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:22:24.354560 1128583 api_server.go:131] duration metric: took 5.019424083s to wait for apiserver health ...
	I0318 14:22:24.354570 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:24.354576 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:24.356602 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:22:20.818751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:23.312003 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:24.358089 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:22:24.375159 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:22:24.426409 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:22:24.452289 1128583 system_pods.go:59] 8 kube-system pods found
	I0318 14:22:24.452326 1128583 system_pods.go:61] "coredns-76f75df574-cksb5" [9cd14e15-7b0f-4978-b667-cba1a54db074] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:22:24.452333 1128583 system_pods.go:61] "etcd-no-preload-188109" [fa7d3ae7-2ac1-4275-8739-686c2e3b7569] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:22:24.452345 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [135ee544-ca83-41ab-9cb2-070587eb3b77] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:22:24.452351 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [fd91846b-6210-4cab-ae0f-5e942b4f596e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:22:24.452361 1128583 system_pods.go:61] "kube-proxy-k5kcr" [a1649d3a-9063-49c3-a8a5-04879eee108b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:22:24.452367 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [5bbb4165-ca8f-4807-ad01-bb35c56b6aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:22:24.452375 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-6pn6n" [004af8d8-fa8c-475c-9604-ed98ccceb3a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:22:24.452390 1128583 system_pods.go:61] "storage-provisioner" [45cae6ca-e3ad-4f7e-9d10-96e091160e4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:22:24.452404 1128583 system_pods.go:74] duration metric: took 25.960889ms to wait for pod list to return data ...
	I0318 14:22:24.452417 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:22:24.456337 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:22:24.456367 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:22:24.456404 1128583 node_conditions.go:105] duration metric: took 3.980296ms to run NodePressure ...
	I0318 14:22:24.456424 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:24.738808 1128583 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743864 1128583 kubeadm.go:733] kubelet initialised
	I0318 14:22:24.743893 1128583 kubeadm.go:734] duration metric: took 5.054661ms waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743905 1128583 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:22:24.749832 1128583 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
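The pod_ready.go waits in this log poll each pod's Ready condition. A rough manual equivalent, assuming the same context name as above:

  # Read the Ready condition directly; it prints "True" once the pod is Ready
  # (the coredns flip shows up at 14:22:28 further down).
  kubectl --context no-preload-188109 -n kube-system get pod coredns-76f75df574-cksb5 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'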
	I0318 14:22:21.597443 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.097053 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.597084 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.097025 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.597649 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.097040 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.597607 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.097886 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.597114 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:26.097643 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.700193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.198261 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:25.810553 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:27.811576 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.310813 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.757033 1128583 pod_ready.go:102] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:28.757522 1128583 pod_ready.go:92] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:28.757562 1128583 pod_ready.go:81] duration metric: took 4.007696709s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:28.757576 1128583 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:30.767877 1128583 pod_ready.go:102] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.597493 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.097772 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.597033 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.097997 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.597751 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.097139 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.596987 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.097453 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.598006 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:31.097066 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.199688 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.199994 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:32.311356 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.311807 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.265717 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:31.265745 1128583 pod_ready.go:81] duration metric: took 2.508162139s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:31.265755 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:33.273718 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:35.275477 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.597688 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.097887 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.597759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.097858 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.597065 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.097024 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.597018 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.097472 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.597226 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.097920 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.200137 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.698589 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:36.812617 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.312289 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:37.774164 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.273935 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.273990 1128583 pod_ready.go:81] duration metric: took 8.008204942s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.274005 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280284 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.280313 1128583 pod_ready.go:81] duration metric: took 6.300519ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280324 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286027 1128583 pod_ready.go:92] pod "kube-proxy-k5kcr" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.286052 1128583 pod_ready.go:81] duration metric: took 5.721757ms for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286061 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292404 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.292450 1128583 pod_ready.go:81] duration metric: took 6.381121ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292462 1128583 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:36.597756 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.097176 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.597091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.097280 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.597026 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.097810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.597789 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.097897 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.597313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:41.096966 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.699760 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.198691 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.199259 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.812494 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:44.312890 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.300167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:43.803022 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.597849 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.097957 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.597473 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.097624 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.597810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.098012 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.597317 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.097384 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.597816 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:46.097353 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.199771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:45.698884 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.811124 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.827580 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.300768 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.300891 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.800442 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.597824 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:47.097559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:47.097660 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:47.142970 1129259 cri.go:89] found id: ""
	I0318 14:22:47.143027 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.143040 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:47.143047 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:47.143196 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:47.183530 1129259 cri.go:89] found id: ""
	I0318 14:22:47.183564 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.183573 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:47.183578 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:47.183654 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:47.226284 1129259 cri.go:89] found id: ""
	I0318 14:22:47.226317 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.226351 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:47.226359 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:47.226433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:47.272642 1129259 cri.go:89] found id: ""
	I0318 14:22:47.272684 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.272708 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:47.272725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:47.272791 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:47.318501 1129259 cri.go:89] found id: ""
	I0318 14:22:47.318547 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.318562 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:47.318571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:47.318652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:47.357743 1129259 cri.go:89] found id: ""
	I0318 14:22:47.357786 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.357801 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:47.357810 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:47.357894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:47.398516 1129259 cri.go:89] found id: ""
	I0318 14:22:47.398550 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.398563 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:47.398571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:47.398649 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:47.443375 1129259 cri.go:89] found id: ""
	I0318 14:22:47.443413 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.443426 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:47.443439 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:47.443456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:47.512719 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:47.512773 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:47.560380 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:47.560421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:47.616159 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:47.616221 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:47.631903 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:47.631945 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:47.766159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
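Every crictl listing in this gather cycle returns an empty id, so the "connection refused" from localhost:8443 is expected: there is no kube-apiserver container to answer it yet. Two checks that reach the same conclusion from inside the node (illustrative only, not part of the harness):

  # No apiserver container and no listener on the apiserver port, consistent with the log above.
  sudo crictl ps -a --name kube-apiserver --quiet
  sudo ss -ltn 'sport = :8443'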
	I0318 14:22:50.267365 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:50.287102 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:50.287169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:50.326581 1129259 cri.go:89] found id: ""
	I0318 14:22:50.326618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.326630 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:50.326638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:50.326719 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:50.366526 1129259 cri.go:89] found id: ""
	I0318 14:22:50.366563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.366577 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:50.366585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:50.366656 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:50.407884 1129259 cri.go:89] found id: ""
	I0318 14:22:50.407920 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.407932 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:50.407939 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:50.408011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:50.446932 1129259 cri.go:89] found id: ""
	I0318 14:22:50.446971 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.446982 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:50.446990 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:50.447047 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:50.490489 1129259 cri.go:89] found id: ""
	I0318 14:22:50.490529 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.490542 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:50.490552 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:50.490632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:50.531796 1129259 cri.go:89] found id: ""
	I0318 14:22:50.531876 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.531896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:50.531911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:50.532000 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:50.579429 1129259 cri.go:89] found id: ""
	I0318 14:22:50.579464 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.579473 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:50.579480 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:50.579555 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:50.617981 1129259 cri.go:89] found id: ""
	I0318 14:22:50.618053 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.618070 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:50.618086 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:50.618107 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:50.690265 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:50.690316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:50.738713 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:50.738750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:50.793127 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:50.793176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:50.809608 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:50.809645 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:50.893389 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:47.699312 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.199049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:51.312163 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.812711 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:52.800573 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:54.801034 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.394103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:53.410405 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:53.410485 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:53.451524 1129259 cri.go:89] found id: ""
	I0318 14:22:53.451563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.451577 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:53.451585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:53.451650 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:53.492923 1129259 cri.go:89] found id: ""
	I0318 14:22:53.492958 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.492972 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:53.492980 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:53.493053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:53.535699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.535738 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.535751 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:53.535757 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:53.535846 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:53.575766 1129259 cri.go:89] found id: ""
	I0318 14:22:53.575807 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.575818 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:53.575843 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:53.575922 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:53.613442 1129259 cri.go:89] found id: ""
	I0318 14:22:53.613473 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.613495 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:53.613502 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:53.613567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:53.655108 1129259 cri.go:89] found id: ""
	I0318 14:22:53.655141 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.655152 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:53.655160 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:53.655233 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:53.693839 1129259 cri.go:89] found id: ""
	I0318 14:22:53.693879 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.693891 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:53.693898 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:53.693971 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:53.736699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.736729 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.736737 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:53.736747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:53.736759 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:53.790612 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:53.790670 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:53.806185 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:53.806226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:53.893535 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:53.893575 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:53.893593 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:53.966434 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:53.966482 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:52.698863 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:55.200175 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.311249 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:58.312362 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:57.300207 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.300788 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.513599 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:56.529572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:56.529652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:56.569850 1129259 cri.go:89] found id: ""
	I0318 14:22:56.569890 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.569905 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:56.569923 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:56.570001 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:56.607508 1129259 cri.go:89] found id: ""
	I0318 14:22:56.607542 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.607554 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:56.607562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:56.607625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:56.644693 1129259 cri.go:89] found id: ""
	I0318 14:22:56.644731 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.644742 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:56.644751 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:56.644825 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:56.686265 1129259 cri.go:89] found id: ""
	I0318 14:22:56.686304 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.686316 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:56.686323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:56.686377 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:56.732519 1129259 cri.go:89] found id: ""
	I0318 14:22:56.732552 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.732559 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:56.732565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:56.732639 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:56.770015 1129259 cri.go:89] found id: ""
	I0318 14:22:56.770049 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.770059 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:56.770067 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:56.770120 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:56.813964 1129259 cri.go:89] found id: ""
	I0318 14:22:56.813993 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.814004 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:56.814012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:56.814108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:56.853650 1129259 cri.go:89] found id: ""
	I0318 14:22:56.853695 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.853705 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:56.853718 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:56.853735 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:56.911922 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:56.911971 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:56.935385 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:56.935415 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:57.040668 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:57.040696 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:57.040710 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:57.123258 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:57.123314 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:59.674542 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:59.688636 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:59.688721 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:59.731479 1129259 cri.go:89] found id: ""
	I0318 14:22:59.731508 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.731517 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:59.731523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:59.731599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:59.778127 1129259 cri.go:89] found id: ""
	I0318 14:22:59.778157 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.778169 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:59.778176 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:59.778245 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:59.820812 1129259 cri.go:89] found id: ""
	I0318 14:22:59.820840 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.820850 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:59.820856 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:59.820930 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:59.866491 1129259 cri.go:89] found id: ""
	I0318 14:22:59.866526 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.866539 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:59.866548 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:59.866614 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:59.907135 1129259 cri.go:89] found id: ""
	I0318 14:22:59.907173 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.907185 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:59.907194 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:59.907266 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:59.948578 1129259 cri.go:89] found id: ""
	I0318 14:22:59.948618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.948627 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:59.948633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:59.948698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:59.986724 1129259 cri.go:89] found id: ""
	I0318 14:22:59.986749 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.986758 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:59.986765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:59.986834 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:00.031190 1129259 cri.go:89] found id: ""
	I0318 14:23:00.031223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:00.031233 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:00.031244 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:00.031260 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:00.087925 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:00.087970 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:00.104778 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:00.104810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:00.190730 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:00.190759 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:00.190775 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:00.282713 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:00.282763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:57.698375 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.706517 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:00.814865 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:03.312810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:01.800156 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.302577 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:02.834125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:02.852098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:02.852184 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:02.902683 1129259 cri.go:89] found id: ""
	I0318 14:23:02.902714 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.902726 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:02.902734 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:02.902844 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:02.963685 1129259 cri.go:89] found id: ""
	I0318 14:23:02.963718 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.963742 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:02.963750 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:02.963822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:03.021566 1129259 cri.go:89] found id: ""
	I0318 14:23:03.021600 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.021611 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:03.021618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:03.021689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:03.062577 1129259 cri.go:89] found id: ""
	I0318 14:23:03.062607 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.062616 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:03.062622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:03.062681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:03.101524 1129259 cri.go:89] found id: ""
	I0318 14:23:03.101554 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.101565 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:03.101573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:03.101645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:03.146253 1129259 cri.go:89] found id: ""
	I0318 14:23:03.146282 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.146294 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:03.146309 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:03.146380 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:03.189196 1129259 cri.go:89] found id: ""
	I0318 14:23:03.189230 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.189241 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:03.189250 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:03.189335 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:03.231627 1129259 cri.go:89] found id: ""
	I0318 14:23:03.231663 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.231676 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:03.231688 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:03.231719 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:03.248100 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:03.248144 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:03.325484 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:03.325509 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:03.325522 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:03.406877 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:03.406925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:03.457449 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:03.457487 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.011169 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:06.026962 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:06.027033 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:06.068556 1129259 cri.go:89] found id: ""
	I0318 14:23:06.068595 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.068606 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:06.068615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:06.068695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:06.110627 1129259 cri.go:89] found id: ""
	I0318 14:23:06.110667 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.110679 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:06.110687 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:06.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:02.198461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.199002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.199307 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:05.811934 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:08.312176 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:10.312721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.800938 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:09.302833 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.151933 1129259 cri.go:89] found id: ""
	I0318 14:23:06.152604 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.152620 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:06.152629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:06.152697 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:06.195300 1129259 cri.go:89] found id: ""
	I0318 14:23:06.195338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.195347 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:06.195353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:06.195417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:06.235155 1129259 cri.go:89] found id: ""
	I0318 14:23:06.235207 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.235220 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:06.235229 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:06.235289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:06.282729 1129259 cri.go:89] found id: ""
	I0318 14:23:06.282772 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.282785 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:06.282793 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:06.282869 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:06.323908 1129259 cri.go:89] found id: ""
	I0318 14:23:06.323940 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.323949 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:06.323955 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:06.324011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:06.365846 1129259 cri.go:89] found id: ""
	I0318 14:23:06.365888 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.365902 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:06.365915 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:06.365934 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:06.413646 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:06.413696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.465648 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:06.465688 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:06.480926 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:06.480958 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:06.554929 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:06.554966 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:06.554985 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.139322 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:09.155700 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:09.155768 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:09.200557 1129259 cri.go:89] found id: ""
	I0318 14:23:09.200585 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.200593 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:09.200599 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:09.200653 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:09.239535 1129259 cri.go:89] found id: ""
	I0318 14:23:09.239573 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.239596 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:09.239613 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:09.239698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:09.279206 1129259 cri.go:89] found id: ""
	I0318 14:23:09.279240 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.279249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:09.279256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:09.279313 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:09.323928 1129259 cri.go:89] found id: ""
	I0318 14:23:09.323964 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.323977 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:09.323986 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:09.324062 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:09.365760 1129259 cri.go:89] found id: ""
	I0318 14:23:09.365796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.365807 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:09.365814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:09.365887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:09.411362 1129259 cri.go:89] found id: ""
	I0318 14:23:09.411394 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.411405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:09.411415 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:09.411508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:09.452793 1129259 cri.go:89] found id: ""
	I0318 14:23:09.452822 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.452873 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:09.452880 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:09.452939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:09.494230 1129259 cri.go:89] found id: ""
	I0318 14:23:09.494259 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.494269 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:09.494279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:09.494292 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:09.546804 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:09.546848 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:09.562509 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:09.562545 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:09.637701 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:09.637723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:09.637738 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.721916 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:09.721962 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:08.699862 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.199072 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.315288 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.813053 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.800023 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.300632 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.271942 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:12.288424 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:12.288503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:12.329950 1129259 cri.go:89] found id: ""
	I0318 14:23:12.329990 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.330004 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:12.330012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:12.330083 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:12.368748 1129259 cri.go:89] found id: ""
	I0318 14:23:12.368798 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.368812 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:12.368821 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:12.368894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:12.408280 1129259 cri.go:89] found id: ""
	I0318 14:23:12.408313 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.408323 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:12.408329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:12.408385 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:12.449537 1129259 cri.go:89] found id: ""
	I0318 14:23:12.449583 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.449593 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:12.449605 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:12.449661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:12.488394 1129259 cri.go:89] found id: ""
	I0318 14:23:12.488427 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.488441 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:12.488449 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:12.488528 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:12.527613 1129259 cri.go:89] found id: ""
	I0318 14:23:12.527649 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.527658 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:12.527664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:12.527716 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:12.568953 1129259 cri.go:89] found id: ""
	I0318 14:23:12.568983 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.568991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:12.568997 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:12.569051 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:12.609622 1129259 cri.go:89] found id: ""
	I0318 14:23:12.609661 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.609672 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:12.609683 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:12.609696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:12.663119 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:12.663176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:12.679466 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:12.679508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:12.763085 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:12.763110 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:12.763125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:12.848677 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:12.848721 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.393108 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:15.406670 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:15.406821 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:15.445518 1129259 cri.go:89] found id: ""
	I0318 14:23:15.445556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.445567 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:15.445574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:15.445632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:15.488009 1129259 cri.go:89] found id: ""
	I0318 14:23:15.488040 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.488052 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:15.488089 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:15.488160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:15.526067 1129259 cri.go:89] found id: ""
	I0318 14:23:15.526099 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.526108 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:15.526115 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:15.526185 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:15.567573 1129259 cri.go:89] found id: ""
	I0318 14:23:15.567608 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.567622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:15.567630 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:15.567701 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:15.606585 1129259 cri.go:89] found id: ""
	I0318 14:23:15.606615 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.606626 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:15.606642 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:15.606700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:15.645265 1129259 cri.go:89] found id: ""
	I0318 14:23:15.645296 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.645305 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:15.645312 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:15.645368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:15.685299 1129259 cri.go:89] found id: ""
	I0318 14:23:15.685332 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.685342 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:15.685348 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:15.685421 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:15.725781 1129259 cri.go:89] found id: ""
	I0318 14:23:15.725818 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.725832 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:15.725848 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:15.725867 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.769528 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:15.769568 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:15.825418 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:15.825461 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:15.842139 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:15.842173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:15.922354 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:15.922419 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:15.922438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:13.199539 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:15.700968 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:17.311266 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:19.311540 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:16.800323 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.801497 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.503475 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:18.518462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:18.518561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:18.559354 1129259 cri.go:89] found id: ""
	I0318 14:23:18.559392 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.559404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:18.559412 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:18.559484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:18.604455 1129259 cri.go:89] found id: ""
	I0318 14:23:18.604488 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.604500 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:18.604507 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:18.604592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:18.646032 1129259 cri.go:89] found id: ""
	I0318 14:23:18.646098 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.646110 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:18.646119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:18.646188 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:18.684752 1129259 cri.go:89] found id: ""
	I0318 14:23:18.684791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.684802 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:18.684808 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:18.684863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:18.728256 1129259 cri.go:89] found id: ""
	I0318 14:23:18.728299 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.728321 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:18.728330 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:18.728409 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:18.771335 1129259 cri.go:89] found id: ""
	I0318 14:23:18.771382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.771392 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:18.771398 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:18.771467 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:18.812273 1129259 cri.go:89] found id: ""
	I0318 14:23:18.812305 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.812318 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:18.812331 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:18.812399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:18.854901 1129259 cri.go:89] found id: ""
	I0318 14:23:18.854942 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.854957 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:18.854971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:18.854990 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:18.939982 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:18.940031 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:18.985433 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:18.985465 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:19.041353 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:19.041405 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:19.057764 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:19.057810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:19.131974 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
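	The empty `found id: ""` / `0 containers` results above mean every crictl probe for a control-plane component is coming back empty, i.e. none of the static pods have been created on this node yet. A minimal sketch of the same probe run directly on the node (assuming crictl and sudo are available; this is not minikube's own code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs of all containers (any state) whose name
	// matches the given component, mirroring the logged command:
	//   sudo crictl ps -a --quiet --name=<component>
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("%s: probe failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// Corresponds to the log's `No container was found matching "<name>"` warnings.
				fmt.Printf("%s: no containers found\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}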
	I0318 14:23:18.198887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:20.698596 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.312215 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.810513 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.299039 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.300143 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.798699 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.632395 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:21.646344 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:21.646434 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:21.687475 1129259 cri.go:89] found id: ""
	I0318 14:23:21.687526 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.687542 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:21.687553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:21.687636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:21.728684 1129259 cri.go:89] found id: ""
	I0318 14:23:21.728722 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.728734 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:21.728742 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:21.728816 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:21.772395 1129259 cri.go:89] found id: ""
	I0318 14:23:21.772436 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.772449 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:21.772457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:21.772529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:21.812758 1129259 cri.go:89] found id: ""
	I0318 14:23:21.812793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.812804 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:21.812813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:21.812878 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:21.854334 1129259 cri.go:89] found id: ""
	I0318 14:23:21.854376 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.854387 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:21.854395 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:21.854468 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:21.894237 1129259 cri.go:89] found id: ""
	I0318 14:23:21.894270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.894278 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:21.894285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:21.894339 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:21.931671 1129259 cri.go:89] found id: ""
	I0318 14:23:21.931709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.931720 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:21.931729 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:21.931795 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:21.971060 1129259 cri.go:89] found id: ""
	I0318 14:23:21.971091 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.971100 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:21.971111 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:21.971125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:22.055070 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:22.055126 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.101854 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:22.101888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:22.157502 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:22.157550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:22.175612 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:22.175648 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:22.261607 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:24.761996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:24.777475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:24.777545 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:24.818385 1129259 cri.go:89] found id: ""
	I0318 14:23:24.818421 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.818434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:24.818447 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:24.818508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:24.856232 1129259 cri.go:89] found id: ""
	I0318 14:23:24.856270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.856282 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:24.856291 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:24.856360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:24.891887 1129259 cri.go:89] found id: ""
	I0318 14:23:24.891924 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.891936 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:24.891945 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:24.892020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:24.937555 1129259 cri.go:89] found id: ""
	I0318 14:23:24.937594 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.937605 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:24.937614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:24.937689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:24.978561 1129259 cri.go:89] found id: ""
	I0318 14:23:24.978598 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.978609 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:24.978620 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:24.978692 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:25.026398 1129259 cri.go:89] found id: ""
	I0318 14:23:25.026453 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.026462 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:25.026475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:25.026529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:25.063346 1129259 cri.go:89] found id: ""
	I0318 14:23:25.063382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.063394 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:25.063403 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:25.063482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:25.106097 1129259 cri.go:89] found id: ""
	I0318 14:23:25.106135 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.106147 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:25.106160 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:25.106177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:25.162362 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:25.162412 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:25.179898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:25.179943 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:25.281856 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:25.281896 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:25.281914 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:25.371561 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:25.371605 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
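	The timestamps on the `pgrep -xnf kube-apiserver.*minikube.*` probes above repeat at roughly three-second intervals, which is the shape of a poll-until-timeout loop. A minimal sketch of such a loop (an assumed reconstruction, not minikube's actual retry code; the two-minute deadline is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning reports whether a kube-apiserver process for the minikube
	// profile exists, mirroring the logged command:
	//   sudo pgrep -xnf kube-apiserver.*minikube.*
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative timeout, not the test's value
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			time.Sleep(3 * time.Second) // matches the ~3s cadence visible in the log
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}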
	I0318 14:23:22.699705 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.200662 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.811810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.813013 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.311457 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.800554 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.304272 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.915774 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:27.931725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:27.931806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:27.971259 1129259 cri.go:89] found id: ""
	I0318 14:23:27.971297 1129259 logs.go:276] 0 containers: []
	W0318 14:23:27.971322 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:27.971340 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:27.971411 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:28.012704 1129259 cri.go:89] found id: ""
	I0318 14:23:28.012735 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.012747 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:28.012755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:28.012829 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:28.051639 1129259 cri.go:89] found id: ""
	I0318 14:23:28.051669 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.051680 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:28.051686 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:28.051753 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:28.091344 1129259 cri.go:89] found id: ""
	I0318 14:23:28.091377 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.091386 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:28.091392 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:28.091445 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:28.131190 1129259 cri.go:89] found id: ""
	I0318 14:23:28.131224 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.131237 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:28.131246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:28.131324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:28.171717 1129259 cri.go:89] found id: ""
	I0318 14:23:28.171756 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.171769 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:28.171777 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:28.171863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:28.207812 1129259 cri.go:89] found id: ""
	I0318 14:23:28.207862 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.207874 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:28.207886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:28.207942 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:28.252721 1129259 cri.go:89] found id: ""
	I0318 14:23:28.252766 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.252779 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:28.252796 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:28.252812 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:28.311227 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:28.311278 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:28.328390 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:28.328422 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:28.413973 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:28.414005 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:28.414026 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:28.504716 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:28.504764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.049944 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:31.065402 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:31.065490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:31.110647 1129259 cri.go:89] found id: ""
	I0318 14:23:31.110675 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.110683 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:31.110690 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:31.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:27.700002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.200376 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.311860 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.313084 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.802042 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:35.299530 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
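	The interleaved pod_ready lines come from the other StartStop clusters in this run, each polling whether its metrics-server pod has reached the Ready condition. A roughly equivalent check (an assumption about what the helper verifies, not the test's own code; the context name below is a placeholder) can be written against kubectl's jsonpath output:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// podReady returns true when the named pod reports condition Ready=True.
	func podReady(context, namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// "<profile>" is a placeholder for the cluster's kubectl context.
		ready, err := podReady("<profile>", "kube-system", "metrics-server-57f55c9bc5-6pn6n")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("Ready:", ready)
	}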
	I0318 14:23:31.154046 1129259 cri.go:89] found id: ""
	I0318 14:23:31.154075 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.154084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:31.154091 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:31.154162 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:31.191863 1129259 cri.go:89] found id: ""
	I0318 14:23:31.191894 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.191904 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:31.191911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:31.191979 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:31.234961 1129259 cri.go:89] found id: ""
	I0318 14:23:31.234993 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.235003 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:31.235011 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:31.235082 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:31.290365 1129259 cri.go:89] found id: ""
	I0318 14:23:31.290402 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.290414 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:31.290421 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:31.290516 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:31.331162 1129259 cri.go:89] found id: ""
	I0318 14:23:31.331198 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.331211 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:31.331219 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:31.331283 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:31.370382 1129259 cri.go:89] found id: ""
	I0318 14:23:31.370424 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.370436 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:31.370448 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:31.370520 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:31.409913 1129259 cri.go:89] found id: ""
	I0318 14:23:31.409948 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.409959 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:31.409971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:31.409987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:31.493416 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:31.493456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.546275 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:31.546309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:31.598580 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:31.598639 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:31.615741 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:31.615778 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:31.694159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.194339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:34.209763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:34.209849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:34.248405 1129259 cri.go:89] found id: ""
	I0318 14:23:34.248442 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.248456 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:34.248464 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:34.248538 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:34.290217 1129259 cri.go:89] found id: ""
	I0318 14:23:34.290249 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.290263 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:34.290270 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:34.290338 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:34.337403 1129259 cri.go:89] found id: ""
	I0318 14:23:34.337441 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.337452 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:34.337460 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:34.337533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:34.380042 1129259 cri.go:89] found id: ""
	I0318 14:23:34.380082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.380096 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:34.380105 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:34.380181 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:34.417834 1129259 cri.go:89] found id: ""
	I0318 14:23:34.417866 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.417879 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:34.417888 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:34.417960 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:34.456496 1129259 cri.go:89] found id: ""
	I0318 14:23:34.456538 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.456549 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:34.456559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:34.456629 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:34.497772 1129259 cri.go:89] found id: ""
	I0318 14:23:34.497809 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.497822 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:34.497831 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:34.497887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:34.544757 1129259 cri.go:89] found id: ""
	I0318 14:23:34.544811 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.544825 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:34.544840 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:34.544859 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:34.602192 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:34.602237 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:34.619476 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:34.619515 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:34.695721 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.695761 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:34.695781 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:34.773045 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:34.773090 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:32.212811 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.700061 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:36.811811 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.312768 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.300434 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.300586 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.320468 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:37.335756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:37.335847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:37.379742 1129259 cri.go:89] found id: ""
	I0318 14:23:37.379791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.379804 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:37.379812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:37.379898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:37.421225 1129259 cri.go:89] found id: ""
	I0318 14:23:37.421261 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.421276 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:37.421284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:37.421353 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:37.463393 1129259 cri.go:89] found id: ""
	I0318 14:23:37.463426 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.463435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:37.463441 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:37.463503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:37.505835 1129259 cri.go:89] found id: ""
	I0318 14:23:37.505871 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.505879 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:37.505885 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:37.505951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:37.545983 1129259 cri.go:89] found id: ""
	I0318 14:23:37.546016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.546029 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:37.546037 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:37.546110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:37.585433 1129259 cri.go:89] found id: ""
	I0318 14:23:37.585466 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.585477 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:37.585486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:37.585561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:37.622978 1129259 cri.go:89] found id: ""
	I0318 14:23:37.623016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.623027 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:37.623034 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:37.623110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:37.675689 1129259 cri.go:89] found id: ""
	I0318 14:23:37.675721 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.675732 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:37.675743 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:37.675763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:37.785788 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.785820 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:37.785839 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:37.870218 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:37.870261 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:37.918199 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:37.918236 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:37.975082 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:37.975135 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:40.491216 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:40.507123 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:40.507189 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:40.548763 1129259 cri.go:89] found id: ""
	I0318 14:23:40.548796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.548806 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:40.548812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:40.548865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:40.589821 1129259 cri.go:89] found id: ""
	I0318 14:23:40.589859 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.589872 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:40.589879 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:40.589961 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:40.629571 1129259 cri.go:89] found id: ""
	I0318 14:23:40.629603 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.629615 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:40.629622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:40.629698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:40.668648 1129259 cri.go:89] found id: ""
	I0318 14:23:40.668682 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.668692 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:40.668719 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:40.668789 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:40.712948 1129259 cri.go:89] found id: ""
	I0318 14:23:40.713005 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.713018 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:40.713027 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:40.713103 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:40.763269 1129259 cri.go:89] found id: ""
	I0318 14:23:40.763298 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.763307 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:40.763313 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:40.763366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:40.809737 1129259 cri.go:89] found id: ""
	I0318 14:23:40.809776 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.809789 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:40.809798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:40.809873 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:40.849882 1129259 cri.go:89] found id: ""
	I0318 14:23:40.849921 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.849931 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:40.849941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:40.849961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:40.931042 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:40.931084 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:40.973246 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:40.973280 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:41.028835 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:41.028880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:41.044250 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:41.044293 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:41.116937 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.199672 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.698826 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.810759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.812721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.800736 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.617773 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:43.635147 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:43.635216 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:43.683392 1129259 cri.go:89] found id: ""
	I0318 14:23:43.683430 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.683446 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:43.683455 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:43.683521 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:43.729761 1129259 cri.go:89] found id: ""
	I0318 14:23:43.729801 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.729813 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:43.729820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:43.729888 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:43.790694 1129259 cri.go:89] found id: ""
	I0318 14:23:43.790728 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.790741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:43.790748 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:43.790819 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:43.838506 1129259 cri.go:89] found id: ""
	I0318 14:23:43.838537 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.838548 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:43.838557 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:43.838625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:43.879695 1129259 cri.go:89] found id: ""
	I0318 14:23:43.879725 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.879735 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:43.879743 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:43.879806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:43.919206 1129259 cri.go:89] found id: ""
	I0318 14:23:43.919238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.919250 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:43.919258 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:43.919333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:43.966266 1129259 cri.go:89] found id: ""
	I0318 14:23:43.966308 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.966321 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:43.966329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:43.966399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:44.006272 1129259 cri.go:89] found id: ""
	I0318 14:23:44.006310 1129259 logs.go:276] 0 containers: []
	W0318 14:23:44.006324 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:44.006339 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:44.006358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:44.063345 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:44.063395 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:44.079323 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:44.079365 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:44.158132 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:44.158157 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:44.158177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:44.244657 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:44.244707 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:41.707557 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.199509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.311703 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.811077 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.301804 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.800280 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.801802 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.791776 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:46.807457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:46.807547 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:46.849964 1129259 cri.go:89] found id: ""
	I0318 14:23:46.850003 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.850017 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:46.850025 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:46.850084 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:46.893174 1129259 cri.go:89] found id: ""
	I0318 14:23:46.893214 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.893227 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:46.893235 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:46.893314 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:46.933932 1129259 cri.go:89] found id: ""
	I0318 14:23:46.933969 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.933981 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:46.933998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:46.934075 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:46.973034 1129259 cri.go:89] found id: ""
	I0318 14:23:46.973073 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.973085 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:46.973093 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:46.973165 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:47.013465 1129259 cri.go:89] found id: ""
	I0318 14:23:47.013502 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.013515 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:47.013523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:47.013595 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:47.050526 1129259 cri.go:89] found id: ""
	I0318 14:23:47.050556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.050569 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:47.050583 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:47.050651 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:47.090395 1129259 cri.go:89] found id: ""
	I0318 14:23:47.090435 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.090448 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:47.090456 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:47.090533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:47.132761 1129259 cri.go:89] found id: ""
	I0318 14:23:47.132790 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.132799 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:47.132809 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:47.132822 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:47.179035 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:47.179073 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:47.231641 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:47.231687 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:47.248134 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:47.248171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:47.330265 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:47.330294 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:47.330311 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:49.912288 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:49.927753 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:49.927842 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:49.968306 1129259 cri.go:89] found id: ""
	I0318 14:23:49.968338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:49.968348 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:49.968354 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:49.968424 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:50.009781 1129259 cri.go:89] found id: ""
	I0318 14:23:50.009813 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.009821 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:50.009828 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:50.009892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:50.049203 1129259 cri.go:89] found id: ""
	I0318 14:23:50.049238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.049249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:50.049257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:50.049323 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:50.089679 1129259 cri.go:89] found id: ""
	I0318 14:23:50.089709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.089719 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:50.089725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:50.089790 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:50.132352 1129259 cri.go:89] found id: ""
	I0318 14:23:50.132384 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.132395 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:50.132404 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:50.132474 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:50.169043 1129259 cri.go:89] found id: ""
	I0318 14:23:50.169076 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.169089 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:50.169098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:50.169166 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:50.207753 1129259 cri.go:89] found id: ""
	I0318 14:23:50.207793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.207805 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:50.207813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:50.207898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:50.247048 1129259 cri.go:89] found id: ""
	I0318 14:23:50.247082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.247093 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:50.247103 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:50.247114 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:50.299768 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:50.299816 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:50.317627 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:50.317674 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:50.393122 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:50.393152 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:50.393170 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:50.480828 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:50.480880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:46.698786 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:49.198083 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:51.198509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.812029 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.311681 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.300917 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.301653 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.030467 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.044538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:53.044615 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:53.082312 1129259 cri.go:89] found id: ""
	I0318 14:23:53.082351 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.082361 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:53.082370 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:53.082431 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:53.127597 1129259 cri.go:89] found id: ""
	I0318 14:23:53.127631 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.127640 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:53.127645 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:53.127708 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:53.172152 1129259 cri.go:89] found id: ""
	I0318 14:23:53.172189 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.172203 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:53.172212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:53.172295 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:53.210210 1129259 cri.go:89] found id: ""
	I0318 14:23:53.210268 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.210281 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:53.210289 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:53.210356 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:53.248963 1129259 cri.go:89] found id: ""
	I0318 14:23:53.248995 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.249004 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:53.249010 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:53.249065 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:53.287853 1129259 cri.go:89] found id: ""
	I0318 14:23:53.287886 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.287896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:53.287903 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:53.287956 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:53.326858 1129259 cri.go:89] found id: ""
	I0318 14:23:53.326895 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.326908 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:53.326917 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:53.326987 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:53.369347 1129259 cri.go:89] found id: ""
	I0318 14:23:53.369381 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.369394 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:53.369407 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:53.369424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:53.420342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:53.420387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:53.436718 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:53.436750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:53.517954 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:53.518018 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:53.518036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:53.597726 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:53.597782 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:56.144313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.699341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.699481 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.810495 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.810917 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:59.812265 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.800712 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.300089 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:56.159569 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:56.159663 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:56.198525 1129259 cri.go:89] found id: ""
	I0318 14:23:56.198563 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.198575 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:56.198584 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:56.198662 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:56.242877 1129259 cri.go:89] found id: ""
	I0318 14:23:56.242913 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.242927 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:56.242942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:56.243018 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:56.282499 1129259 cri.go:89] found id: ""
	I0318 14:23:56.282531 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.282541 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:56.282547 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:56.282618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:56.321765 1129259 cri.go:89] found id: ""
	I0318 14:23:56.321810 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.321825 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:56.321833 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:56.321904 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:56.364005 1129259 cri.go:89] found id: ""
	I0318 14:23:56.364042 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.364054 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:56.364064 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:56.364138 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:56.402312 1129259 cri.go:89] found id: ""
	I0318 14:23:56.402339 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.402350 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:56.402356 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:56.402419 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:56.445638 1129259 cri.go:89] found id: ""
	I0318 14:23:56.445674 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.445686 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:56.445694 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:56.445760 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:56.488833 1129259 cri.go:89] found id: ""
	I0318 14:23:56.488870 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.488883 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:56.488896 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:56.488915 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:56.540862 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:56.540907 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:56.557124 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:56.557171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:56.634679 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:56.634711 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:56.634727 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:56.716419 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:56.716464 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.263125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:59.277619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:59.277703 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:59.318616 1129259 cri.go:89] found id: ""
	I0318 14:23:59.318648 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.318661 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:59.318668 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:59.318740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:59.358540 1129259 cri.go:89] found id: ""
	I0318 14:23:59.358577 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.358589 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:59.358597 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:59.358670 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:59.399046 1129259 cri.go:89] found id: ""
	I0318 14:23:59.399082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.399093 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:59.399099 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:59.399169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:59.439165 1129259 cri.go:89] found id: ""
	I0318 14:23:59.439223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.439236 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:59.439245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:59.439312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:59.476719 1129259 cri.go:89] found id: ""
	I0318 14:23:59.476755 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.476767 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:59.476775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:59.476833 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:59.515847 1129259 cri.go:89] found id: ""
	I0318 14:23:59.515878 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.515888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:59.515895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:59.515966 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:59.560831 1129259 cri.go:89] found id: ""
	I0318 14:23:59.560861 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.560871 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:59.560877 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:59.560939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:59.601176 1129259 cri.go:89] found id: ""
	I0318 14:23:59.601209 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.601219 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:59.601237 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:59.601253 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:59.616829 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:59.616862 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:59.695270 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:59.695300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:59.695316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:59.773564 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:59.773610 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.819326 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:59.819364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:58.198656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.699394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.311601 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.311669 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.300584 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.300628 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.372331 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:02.388245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:02.388333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:02.425594 1129259 cri.go:89] found id: ""
	I0318 14:24:02.425639 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.425655 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:02.425664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:02.425740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:02.467755 1129259 cri.go:89] found id: ""
	I0318 14:24:02.467786 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.467794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:02.467800 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:02.467890 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:02.510004 1129259 cri.go:89] found id: ""
	I0318 14:24:02.510035 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.510045 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:02.510051 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:02.510104 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:02.555590 1129259 cri.go:89] found id: ""
	I0318 14:24:02.555623 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.555632 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:02.555638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:02.555693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:02.595096 1129259 cri.go:89] found id: ""
	I0318 14:24:02.595125 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.595135 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:02.595141 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:02.595214 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:02.639452 1129259 cri.go:89] found id: ""
	I0318 14:24:02.639482 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.639491 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:02.639498 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:02.639563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:02.677653 1129259 cri.go:89] found id: ""
	I0318 14:24:02.677684 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.677700 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:02.677706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:02.677765 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:02.714853 1129259 cri.go:89] found id: ""
	I0318 14:24:02.714885 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.714898 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:02.714909 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:02.714923 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:02.767697 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:02.767742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:02.782786 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:02.782844 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:02.868981 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:02.869020 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:02.869037 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:02.944382 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:02.944421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.491779 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:05.507129 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:05.507213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:05.548809 1129259 cri.go:89] found id: ""
	I0318 14:24:05.548845 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.548858 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:05.548866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:05.548941 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:05.588005 1129259 cri.go:89] found id: ""
	I0318 14:24:05.588040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.588050 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:05.588056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:05.588108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:05.627670 1129259 cri.go:89] found id: ""
	I0318 14:24:05.627707 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.627720 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:05.627728 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:05.627814 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:05.666900 1129259 cri.go:89] found id: ""
	I0318 14:24:05.666936 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.666948 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:05.666957 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:05.667029 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:05.705796 1129259 cri.go:89] found id: ""
	I0318 14:24:05.705831 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.705844 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:05.705852 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:05.705923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:05.749842 1129259 cri.go:89] found id: ""
	I0318 14:24:05.749875 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.749888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:05.749896 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:05.749981 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:05.790843 1129259 cri.go:89] found id: ""
	I0318 14:24:05.790881 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.790896 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:05.790905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:05.790992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:05.832347 1129259 cri.go:89] found id: ""
	I0318 14:24:05.832383 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.832395 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:05.832408 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:05.832424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.874185 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:05.874219 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:05.929482 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:05.929534 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:05.945151 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:05.945187 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:06.024617 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:06.024644 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:06.024663 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:03.198564 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:05.198935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.811819 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.812462 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.300681 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.300912 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.799297 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.607030 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:08.622039 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:08.622140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:08.661599 1129259 cri.go:89] found id: ""
	I0318 14:24:08.661638 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.661647 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:08.661654 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:08.661728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:08.699890 1129259 cri.go:89] found id: ""
	I0318 14:24:08.699920 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.699931 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:08.699940 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:08.700009 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:08.745504 1129259 cri.go:89] found id: ""
	I0318 14:24:08.745541 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.745554 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:08.745562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:08.745624 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:08.784162 1129259 cri.go:89] found id: ""
	I0318 14:24:08.784204 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.784217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:08.784226 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:08.784302 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:08.824197 1129259 cri.go:89] found id: ""
	I0318 14:24:08.824227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.824236 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:08.824242 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:08.824301 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:08.865096 1129259 cri.go:89] found id: ""
	I0318 14:24:08.865128 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.865137 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:08.865146 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:08.865207 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:08.905337 1129259 cri.go:89] found id: ""
	I0318 14:24:08.905371 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.905385 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:08.905393 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:08.905477 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:08.945837 1129259 cri.go:89] found id: ""
	I0318 14:24:08.945880 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.945894 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:08.945906 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:08.945925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:09.023425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:09.023454 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:09.023473 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:09.107945 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:09.107989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:09.149742 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:09.149804 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:09.202813 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:09.202856 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:07.699433 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.198062 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.311072 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:13.311533 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:15.313064 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:12.799619 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.800637 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.720686 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:11.735125 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:11.735218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:11.772164 1129259 cri.go:89] found id: ""
	I0318 14:24:11.772198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.772210 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:11.772218 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:11.772285 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:11.811279 1129259 cri.go:89] found id: ""
	I0318 14:24:11.811309 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.811326 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:11.811334 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:11.811402 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:11.855011 1129259 cri.go:89] found id: ""
	I0318 14:24:11.855052 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.855065 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:11.855073 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:11.855146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:11.893168 1129259 cri.go:89] found id: ""
	I0318 14:24:11.893198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.893206 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:11.893212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:11.893273 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:11.930545 1129259 cri.go:89] found id: ""
	I0318 14:24:11.930583 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.930598 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:11.930608 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:11.930680 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:11.974014 1129259 cri.go:89] found id: ""
	I0318 14:24:11.974040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.974049 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:11.974063 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:11.974147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:12.025218 1129259 cri.go:89] found id: ""
	I0318 14:24:12.025247 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.025257 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:12.025263 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:12.025340 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:12.068361 1129259 cri.go:89] found id: ""
	I0318 14:24:12.068393 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.068406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:12.068425 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:12.068444 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:12.122840 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:12.122892 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:12.138841 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:12.138877 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:12.219567 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:12.219588 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:12.219602 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:12.307322 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:12.307368 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:14.855576 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:14.870076 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:14.870160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:14.910346 1129259 cri.go:89] found id: ""
	I0318 14:24:14.910387 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.910399 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:14.910407 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:14.910479 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:14.957120 1129259 cri.go:89] found id: ""
	I0318 14:24:14.957151 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.957165 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:14.957170 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:14.957238 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:14.998329 1129259 cri.go:89] found id: ""
	I0318 14:24:14.998360 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.998372 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:14.998381 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:14.998450 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:15.036994 1129259 cri.go:89] found id: ""
	I0318 14:24:15.037025 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.037034 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:15.037040 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:15.037095 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:15.075241 1129259 cri.go:89] found id: ""
	I0318 14:24:15.075272 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.075282 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:15.075288 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:15.075368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:15.114149 1129259 cri.go:89] found id: ""
	I0318 14:24:15.114199 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.114208 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:15.114215 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:15.114296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:15.155710 1129259 cri.go:89] found id: ""
	I0318 14:24:15.155745 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.155755 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:15.155762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:15.155847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:15.196863 1129259 cri.go:89] found id: ""
	I0318 14:24:15.196899 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.196910 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:15.196928 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:15.196946 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:15.253103 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:15.253147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:15.268783 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:15.268829 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:15.352694 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:15.352723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:15.352743 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:15.435023 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:15.435068 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:12.201234 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.698988 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.811663 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.812068 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:16.801294 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.301959 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.978170 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.994862 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:17.994929 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:18.036067 1129259 cri.go:89] found id: ""
	I0318 14:24:18.036103 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.036112 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:18.036119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:18.036186 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:18.081249 1129259 cri.go:89] found id: ""
	I0318 14:24:18.081280 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.081291 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:18.081297 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:18.081352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:18.122336 1129259 cri.go:89] found id: ""
	I0318 14:24:18.122367 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.122376 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:18.122382 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:18.122441 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:18.163897 1129259 cri.go:89] found id: ""
	I0318 14:24:18.163931 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.163940 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:18.163949 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:18.164012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:18.206744 1129259 cri.go:89] found id: ""
	I0318 14:24:18.206781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.206792 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:18.206798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:18.206881 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:18.245738 1129259 cri.go:89] found id: ""
	I0318 14:24:18.245767 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.245778 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:18.245786 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:18.245851 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:18.285181 1129259 cri.go:89] found id: ""
	I0318 14:24:18.285211 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.285221 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:18.285228 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:18.285282 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:18.328130 1129259 cri.go:89] found id: ""
	I0318 14:24:18.328162 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.328174 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:18.328193 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:18.328210 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:18.410346 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:18.410387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:18.467118 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:18.467154 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:18.530635 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:18.530704 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:18.549898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:18.549952 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:18.646134 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.146368 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.199048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.200040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:22.312401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.812678 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.799684 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.301211 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.162077 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:21.162156 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:21.200211 1129259 cri.go:89] found id: ""
	I0318 14:24:21.200242 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.200251 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:21.200257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:21.200329 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:21.241228 1129259 cri.go:89] found id: ""
	I0318 14:24:21.241265 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.241277 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:21.241284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:21.241359 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:21.278110 1129259 cri.go:89] found id: ""
	I0318 14:24:21.278147 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.278159 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:21.278167 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:21.278240 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:21.317067 1129259 cri.go:89] found id: ""
	I0318 14:24:21.317104 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.317115 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:21.317124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:21.317201 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:21.356217 1129259 cri.go:89] found id: ""
	I0318 14:24:21.356251 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.356260 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:21.356267 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:21.356326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:21.394990 1129259 cri.go:89] found id: ""
	I0318 14:24:21.395031 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.395047 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:21.395056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:21.395136 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:21.435880 1129259 cri.go:89] found id: ""
	I0318 14:24:21.435913 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.435928 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:21.435937 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:21.436023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:21.477754 1129259 cri.go:89] found id: ""
	I0318 14:24:21.477801 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.477814 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:21.477826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:21.477851 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:21.493178 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:21.493220 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:21.570200 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.570239 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:21.570257 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:21.658100 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:21.658147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.703286 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:21.703327 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.266730 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:24.285544 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:24.285655 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:24.338183 1129259 cri.go:89] found id: ""
	I0318 14:24:24.338234 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.338248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:24.338256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:24.338326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:24.407496 1129259 cri.go:89] found id: ""
	I0318 14:24:24.407529 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.407543 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:24.407551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:24.407618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:24.457689 1129259 cri.go:89] found id: ""
	I0318 14:24:24.457728 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.457741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:24.457749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:24.457831 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:24.498685 1129259 cri.go:89] found id: ""
	I0318 14:24:24.498709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.498718 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:24.498725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:24.498783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:24.537966 1129259 cri.go:89] found id: ""
	I0318 14:24:24.537999 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.538009 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:24.538016 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:24.538070 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:24.576493 1129259 cri.go:89] found id: ""
	I0318 14:24:24.576522 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.576532 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:24.576538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:24.576592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:24.613764 1129259 cri.go:89] found id: ""
	I0318 14:24:24.613799 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.613812 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:24.613820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:24.613893 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:24.655862 1129259 cri.go:89] found id: ""
	I0318 14:24:24.655892 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.655906 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:24.655919 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:24.655937 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.710557 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:24.710604 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:24.725755 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:24.725792 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:24.805585 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:24.805616 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:24.805633 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:24.889922 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:24.889989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.699674 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.199382 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.312672 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.315087 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:26.800594 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.299763 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.437998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:27.454560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:27.454664 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:27.493973 1129259 cri.go:89] found id: ""
	I0318 14:24:27.494003 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.494011 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:27.494019 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:27.494078 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:27.543071 1129259 cri.go:89] found id: ""
	I0318 14:24:27.543109 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.543122 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:27.543131 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:27.543211 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:27.586163 1129259 cri.go:89] found id: ""
	I0318 14:24:27.586196 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.586212 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:27.586220 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:27.586324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:27.625233 1129259 cri.go:89] found id: ""
	I0318 14:24:27.625271 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.625284 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:27.625293 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:27.625365 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:27.663729 1129259 cri.go:89] found id: ""
	I0318 14:24:27.663772 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.663782 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:27.663798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:27.663887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:27.702041 1129259 cri.go:89] found id: ""
	I0318 14:24:27.702072 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.702082 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:27.702090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:27.702158 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:27.745186 1129259 cri.go:89] found id: ""
	I0318 14:24:27.745216 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.745226 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:27.745233 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:27.745296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:27.786673 1129259 cri.go:89] found id: ""
	I0318 14:24:27.786709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.786719 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:27.786729 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:27.786742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:27.842472 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:27.842531 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:27.856985 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:27.857016 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:27.935445 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:27.935478 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:27.935496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:28.024737 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:28.024795 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:30.571003 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:30.585617 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:30.585714 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:30.628461 1129259 cri.go:89] found id: ""
	I0318 14:24:30.628488 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.628497 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:30.628503 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:30.628566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:30.674555 1129259 cri.go:89] found id: ""
	I0318 14:24:30.674595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.674610 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:30.674618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:30.674695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:30.714899 1129259 cri.go:89] found id: ""
	I0318 14:24:30.714950 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.714961 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:30.714970 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:30.715039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:30.756263 1129259 cri.go:89] found id: ""
	I0318 14:24:30.756295 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.756305 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:30.756311 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:30.756366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:30.795213 1129259 cri.go:89] found id: ""
	I0318 14:24:30.795244 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.795258 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:30.795265 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:30.795336 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:30.837198 1129259 cri.go:89] found id: ""
	I0318 14:24:30.837233 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.837242 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:30.837248 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:30.837306 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:30.875367 1129259 cri.go:89] found id: ""
	I0318 14:24:30.875404 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.875417 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:30.875427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:30.875510 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:30.918664 1129259 cri.go:89] found id: ""
	I0318 14:24:30.918701 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.918713 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:30.918727 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:30.918747 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:31.004325 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:31.004350 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:31.004367 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:31.093837 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:31.093882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:31.138285 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:31.138318 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:26.698769 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:28.700212 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.200571 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.811482 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.812980 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.299818 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.300656 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.798808 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.192059 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:31.192106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:33.708873 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:33.723861 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:33.723954 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:33.766843 1129259 cri.go:89] found id: ""
	I0318 14:24:33.766884 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.766899 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:33.766908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:33.766991 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:33.808273 1129259 cri.go:89] found id: ""
	I0318 14:24:33.808308 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.808319 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:33.808327 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:33.808401 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:33.847755 1129259 cri.go:89] found id: ""
	I0318 14:24:33.847789 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.847801 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:33.847823 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:33.847909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:33.888733 1129259 cri.go:89] found id: ""
	I0318 14:24:33.888785 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.888807 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:33.888817 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:33.888892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:33.927231 1129259 cri.go:89] found id: ""
	I0318 14:24:33.927281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.927294 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:33.927301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:33.927370 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:33.968573 1129259 cri.go:89] found id: ""
	I0318 14:24:33.968602 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.968612 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:33.968619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:33.968685 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:34.019265 1129259 cri.go:89] found id: ""
	I0318 14:24:34.019298 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.019314 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:34.019321 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:34.019392 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:34.059195 1129259 cri.go:89] found id: ""
	I0318 14:24:34.059226 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.059237 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:34.059251 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:34.059268 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:34.101211 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:34.101252 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:34.154985 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:34.155029 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:34.169762 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:34.169798 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:34.247258 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:34.247289 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:34.247304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:33.698578 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.698656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.814759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:38.311080 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:40.312503 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:37.800024 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.801292 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:36.829539 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:36.844908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:36.845003 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:36.883646 1129259 cri.go:89] found id: ""
	I0318 14:24:36.883673 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.883682 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:36.883688 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:36.883742 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:36.927651 1129259 cri.go:89] found id: ""
	I0318 14:24:36.927685 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.927700 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:36.927706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:36.927774 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:36.972206 1129259 cri.go:89] found id: ""
	I0318 14:24:36.972243 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.972256 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:36.972264 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:36.972337 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:37.011161 1129259 cri.go:89] found id: ""
	I0318 14:24:37.011203 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.011217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:37.011225 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:37.011293 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:37.050426 1129259 cri.go:89] found id: ""
	I0318 14:24:37.050456 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.050465 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:37.050472 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:37.050525 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:37.090240 1129259 cri.go:89] found id: ""
	I0318 14:24:37.090277 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.090288 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:37.090296 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:37.090371 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:37.138359 1129259 cri.go:89] found id: ""
	I0318 14:24:37.138392 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.138405 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:37.138414 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:37.138484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:37.175367 1129259 cri.go:89] found id: ""
	I0318 14:24:37.175397 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.175406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:37.175419 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:37.175438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.190633 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:37.190665 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:37.266426 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:37.266455 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:37.266474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:37.352005 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:37.352052 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:37.398004 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:37.398042 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:39.957926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:39.972906 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:39.972994 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:40.015482 1129259 cri.go:89] found id: ""
	I0318 14:24:40.015531 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.015543 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:40.015553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:40.015632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:40.057869 1129259 cri.go:89] found id: ""
	I0318 14:24:40.057901 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.057913 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:40.057921 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:40.057992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:40.099638 1129259 cri.go:89] found id: ""
	I0318 14:24:40.099666 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.099676 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:40.099683 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:40.099748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:40.137566 1129259 cri.go:89] found id: ""
	I0318 14:24:40.137607 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.137619 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:40.137629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:40.137698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:40.178781 1129259 cri.go:89] found id: ""
	I0318 14:24:40.178816 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.178828 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:40.178835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:40.178902 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:40.221065 1129259 cri.go:89] found id: ""
	I0318 14:24:40.221106 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.221118 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:40.221135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:40.221213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:40.262154 1129259 cri.go:89] found id: ""
	I0318 14:24:40.262193 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.262204 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:40.262212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:40.262288 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:40.302898 1129259 cri.go:89] found id: ""
	I0318 14:24:40.302932 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.302944 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:40.302957 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:40.302973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:40.384224 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:40.384248 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:40.384270 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:40.473257 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:40.473313 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:40.513518 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:40.513571 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:40.569342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:40.569393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.698736 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.699014 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.813028 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.814259 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.300121 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.802581 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:43.085260 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:43.100701 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:43.100773 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:43.141395 1129259 cri.go:89] found id: ""
	I0318 14:24:43.141441 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.141453 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:43.141462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:43.141531 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:43.185883 1129259 cri.go:89] found id: ""
	I0318 14:24:43.185918 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.185929 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:43.185938 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:43.186012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:43.225249 1129259 cri.go:89] found id: ""
	I0318 14:24:43.225281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.225292 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:43.225301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:43.225375 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:43.270433 1129259 cri.go:89] found id: ""
	I0318 14:24:43.270474 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.270484 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:43.270491 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:43.270557 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:43.312947 1129259 cri.go:89] found id: ""
	I0318 14:24:43.312975 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.312986 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:43.312994 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:43.313061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:43.352095 1129259 cri.go:89] found id: ""
	I0318 14:24:43.352130 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.352144 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:43.352153 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:43.352222 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:43.394789 1129259 cri.go:89] found id: ""
	I0318 14:24:43.394820 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.394833 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:43.394840 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:43.394913 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:43.440612 1129259 cri.go:89] found id: ""
	I0318 14:24:43.440646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.440655 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:43.440668 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:43.440686 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:43.497257 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:43.497304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:43.513680 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:43.513715 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:43.599437 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:43.599471 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:43.599490 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:43.681435 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:43.681480 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:42.198235 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.199088 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.312598 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.814542 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.300765 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.801469 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:46.227650 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:46.242656 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:46.242724 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:46.288400 1129259 cri.go:89] found id: ""
	I0318 14:24:46.288434 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.288448 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:46.288457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:46.288544 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:46.327648 1129259 cri.go:89] found id: ""
	I0318 14:24:46.327691 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.327704 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:46.327712 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:46.327785 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:46.370251 1129259 cri.go:89] found id: ""
	I0318 14:24:46.370292 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.370305 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:46.370322 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:46.370404 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:46.413589 1129259 cri.go:89] found id: ""
	I0318 14:24:46.413629 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.413639 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:46.413646 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:46.413712 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:46.453557 1129259 cri.go:89] found id: ""
	I0318 14:24:46.453593 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.453606 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:46.453615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:46.453696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:46.492502 1129259 cri.go:89] found id: ""
	I0318 14:24:46.492538 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.492552 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:46.492560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:46.492641 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:46.534614 1129259 cri.go:89] found id: ""
	I0318 14:24:46.534646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.534656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:46.534662 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:46.534722 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:46.576300 1129259 cri.go:89] found id: ""
	I0318 14:24:46.576331 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.576340 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:46.576351 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:46.576363 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.665281 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:46.665329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:46.712011 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:46.712050 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:46.799071 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:46.799128 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:46.814892 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:46.814921 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:46.893065 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
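	Every `kubectl describe nodes` attempt in these cycles fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty crictl listings: no kube-apiserver container exists, so nothing is listening on the apiserver port. A tiny illustrative probe (Go; the address is taken from the log, the program itself is not part of the test suite) that reproduces the distinction:

    // apiprobe.go - illustrative only.
    // Checks whether anything accepts TCP connections on the apiserver
    // address reported in the kubectl error above (localhost:8443).
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
        if err != nil {
            // With no kube-apiserver running this returns "connection refused",
            // matching the kubectl stderr captured in the log.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("something is listening on 127.0.0.1:8443")
    }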
	I0318 14:24:49.393340 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:49.407307 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:49.407388 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:49.449296 1129259 cri.go:89] found id: ""
	I0318 14:24:49.449330 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.449343 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:49.449351 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:49.449412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:49.489753 1129259 cri.go:89] found id: ""
	I0318 14:24:49.489781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.489790 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:49.489796 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:49.489865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:49.533692 1129259 cri.go:89] found id: ""
	I0318 14:24:49.533740 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.533756 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:49.533765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:49.533849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:49.580932 1129259 cri.go:89] found id: ""
	I0318 14:24:49.580980 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.580992 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:49.581001 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:49.581090 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:49.617642 1129259 cri.go:89] found id: ""
	I0318 14:24:49.617672 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.617684 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:49.617692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:49.617758 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:49.655313 1129259 cri.go:89] found id: ""
	I0318 14:24:49.655342 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.655351 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:49.655358 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:49.655412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:49.694613 1129259 cri.go:89] found id: ""
	I0318 14:24:49.694645 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.694656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:49.694665 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:49.694735 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:49.736954 1129259 cri.go:89] found id: ""
	I0318 14:24:49.737005 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.737017 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:49.737030 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:49.737051 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:49.779496 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:49.779540 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:49.836505 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:49.836549 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:49.853299 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:49.853329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:49.929231 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.929254 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:49.929269 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.699746 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.198789 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:51.199313 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.311753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.311952 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.301766 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.513104 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:52.534931 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:52.535032 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:52.578668 1129259 cri.go:89] found id: ""
	I0318 14:24:52.578706 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.578720 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:52.578731 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:52.578788 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:52.616799 1129259 cri.go:89] found id: ""
	I0318 14:24:52.616829 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.616838 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:52.616845 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:52.616909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:52.659502 1129259 cri.go:89] found id: ""
	I0318 14:24:52.659595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.659616 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:52.659627 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:52.659696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:52.704402 1129259 cri.go:89] found id: ""
	I0318 14:24:52.704431 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.704439 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:52.704446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:52.704524 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:52.748018 1129259 cri.go:89] found id: ""
	I0318 14:24:52.748043 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.748052 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:52.748059 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:52.748128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:52.786901 1129259 cri.go:89] found id: ""
	I0318 14:24:52.786942 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.786956 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:52.786966 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:52.787040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:52.828259 1129259 cri.go:89] found id: ""
	I0318 14:24:52.828288 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.828298 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:52.828304 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:52.828360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:52.867439 1129259 cri.go:89] found id: ""
	I0318 14:24:52.867470 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.867482 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:52.867495 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:52.867513 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:52.920709 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:52.920755 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:52.936596 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:52.936631 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:53.012271 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:53.012300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:53.012315 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.092318 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:53.092358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:55.642662 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:55.656650 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:55.656725 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:55.700050 1129259 cri.go:89] found id: ""
	I0318 14:24:55.700085 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.700099 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:55.700109 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:55.700183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:55.742561 1129259 cri.go:89] found id: ""
	I0318 14:24:55.742599 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.742608 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:55.742614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:55.742668 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:55.780395 1129259 cri.go:89] found id: ""
	I0318 14:24:55.780427 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.780435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:55.780442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:55.780505 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:55.819259 1129259 cri.go:89] found id: ""
	I0318 14:24:55.819291 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.819301 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:55.819310 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:55.819366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:55.859189 1129259 cri.go:89] found id: ""
	I0318 14:24:55.859227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.859240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:55.859249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:55.859322 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:55.900012 1129259 cri.go:89] found id: ""
	I0318 14:24:55.900050 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.900062 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:55.900070 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:55.900146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:55.936548 1129259 cri.go:89] found id: ""
	I0318 14:24:55.936578 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.936587 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:55.936595 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:55.936661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:55.977201 1129259 cri.go:89] found id: ""
	I0318 14:24:55.977241 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.977254 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:55.977266 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:55.977281 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:56.030548 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:56.030603 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:56.047923 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:56.047959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:56.129425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:56.129457 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:56.129474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.199935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:55.699461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.811981 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.814200 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.799464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.800623 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.224109 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:56.224173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.771513 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:58.786323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:58.786416 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:58.832801 1129259 cri.go:89] found id: ""
	I0318 14:24:58.832843 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.832856 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:58.832868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:58.832945 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:58.873757 1129259 cri.go:89] found id: ""
	I0318 14:24:58.873792 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.873802 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:58.873811 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:58.873875 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:58.920727 1129259 cri.go:89] found id: ""
	I0318 14:24:58.920759 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.920769 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:58.920775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:58.920841 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:58.975483 1129259 cri.go:89] found id: ""
	I0318 14:24:58.975524 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.975538 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:58.975549 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:58.975627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:59.027055 1129259 cri.go:89] found id: ""
	I0318 14:24:59.027092 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.027104 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:59.027113 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:59.027195 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:59.073394 1129259 cri.go:89] found id: ""
	I0318 14:24:59.073435 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.073457 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:59.073466 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:59.073536 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:59.114945 1129259 cri.go:89] found id: ""
	I0318 14:24:59.114982 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.114991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:59.114998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:59.115056 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:59.155496 1129259 cri.go:89] found id: ""
	I0318 14:24:59.155533 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.155545 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:59.155558 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:59.155574 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:59.214435 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:59.214476 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:59.230733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:59.230780 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:59.308976 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:59.309007 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:59.309024 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:59.396237 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:59.396287 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.198049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:00.199613 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.312698 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.811687 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.299462 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.300239 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:05.301621 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
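	The interleaved pod_ready.go lines appear to come from the parallel StartStop test processes (pids 1128964, 1128788 and 1128583), each polling a metrics-server pod whose Ready condition stays "False" for the whole window. A hedged sketch of such a readiness check using client-go (the pod name and namespace are copied from the log; the program is illustrative, not the test's own pod_ready implementation, and needs the k8s.io/client-go module):

    // podready.go - illustrative client-go sketch, not minikube's pod_ready.go.
    // Prints the Ready condition of one of the pods polled in the log above.
    package main

    import (
        "context"
        "flag"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := flag.String("kubeconfig", "", "path to a kubeconfig file")
        flag.Parse()

        cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Pod name taken from the log lines above.
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
            "metrics-server-57f55c9bc5-6pn6n", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                fmt.Printf("pod %q has status \"Ready\":%q\n", pod.Name, cond.Status)
            }
        }
    }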
	I0318 14:25:01.941736 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:01.955973 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:01.956058 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:01.995149 1129259 cri.go:89] found id: ""
	I0318 14:25:01.995187 1129259 logs.go:276] 0 containers: []
	W0318 14:25:01.995208 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:01.995217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:01.995287 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:02.036739 1129259 cri.go:89] found id: ""
	I0318 14:25:02.036780 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.036794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:02.036804 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:02.036880 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:02.074909 1129259 cri.go:89] found id: ""
	I0318 14:25:02.074937 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.074947 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:02.074954 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:02.075039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:02.112164 1129259 cri.go:89] found id: ""
	I0318 14:25:02.112203 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.112215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:02.112223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:02.112281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:02.150756 1129259 cri.go:89] found id: ""
	I0318 14:25:02.150795 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.150808 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:02.150816 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:02.150885 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:02.194475 1129259 cri.go:89] found id: ""
	I0318 14:25:02.194511 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.194522 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:02.194531 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:02.194603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:02.237472 1129259 cri.go:89] found id: ""
	I0318 14:25:02.237499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.237508 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:02.237514 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:02.237582 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:02.278094 1129259 cri.go:89] found id: ""
	I0318 14:25:02.278136 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.278157 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:02.278171 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:02.278190 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:02.366946 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:02.367004 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.412234 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:02.412267 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:02.470036 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:02.470109 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:02.487051 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:02.487085 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:02.574515 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.074768 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:05.090386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:05.090466 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:05.131144 1129259 cri.go:89] found id: ""
	I0318 14:25:05.131180 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.131190 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:05.131198 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:05.131254 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:05.171613 1129259 cri.go:89] found id: ""
	I0318 14:25:05.171653 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.171668 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:05.171676 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:05.171748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:05.219256 1129259 cri.go:89] found id: ""
	I0318 14:25:05.219296 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.219310 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:05.219320 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:05.219410 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:05.258580 1129259 cri.go:89] found id: ""
	I0318 14:25:05.258615 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.258625 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:05.258633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:05.258688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:05.297198 1129259 cri.go:89] found id: ""
	I0318 14:25:05.297230 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.297240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:05.297249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:05.297319 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:05.341148 1129259 cri.go:89] found id: ""
	I0318 14:25:05.341184 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.341196 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:05.341205 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:05.341274 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:05.382094 1129259 cri.go:89] found id: ""
	I0318 14:25:05.382121 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.382129 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:05.382135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:05.382199 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:05.422027 1129259 cri.go:89] found id: ""
	I0318 14:25:05.422074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.422083 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:05.422092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:05.422106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:05.474193 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:05.474238 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:05.490325 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:05.490364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:05.566999 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.567029 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:05.567048 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:05.647205 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:05.647247 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.200341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:04.698040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:06.312239 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.811427 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:07.800597 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:10.300964 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.192390 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:08.207905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:08.207992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:08.247221 1129259 cri.go:89] found id: ""
	I0318 14:25:08.247257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.247269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:08.247278 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:08.247347 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:08.289460 1129259 cri.go:89] found id: ""
	I0318 14:25:08.289496 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.289509 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:08.289516 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:08.289601 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:08.330232 1129259 cri.go:89] found id: ""
	I0318 14:25:08.330273 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.330286 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:08.330294 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:08.330366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:08.368035 1129259 cri.go:89] found id: ""
	I0318 14:25:08.368074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.368086 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:08.368094 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:08.368170 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:08.413598 1129259 cri.go:89] found id: ""
	I0318 14:25:08.413631 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.413641 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:08.413647 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:08.413745 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:08.451706 1129259 cri.go:89] found id: ""
	I0318 14:25:08.451742 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.451754 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:08.451762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:08.451856 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:08.491037 1129259 cri.go:89] found id: ""
	I0318 14:25:08.491075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.491088 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:08.491096 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:08.491175 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:08.529376 1129259 cri.go:89] found id: ""
	I0318 14:25:08.529412 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.529423 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:08.529435 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:08.529453 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:08.586539 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:08.586580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:08.602197 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:08.602226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:08.678158 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:08.678186 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:08.678202 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:08.764272 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:08.764326 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:06.700315 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:09.198241 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.198296 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.312458 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:13.312602 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:12.799474 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:14.800216 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.307681 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:11.322482 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:11.322565 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:11.361333 1129259 cri.go:89] found id: ""
	I0318 14:25:11.361366 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.361378 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:11.361386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:11.361457 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:11.399404 1129259 cri.go:89] found id: ""
	I0318 14:25:11.399444 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.399468 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:11.399486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:11.399556 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:11.438279 1129259 cri.go:89] found id: ""
	I0318 14:25:11.438324 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.438338 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:11.438350 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:11.438426 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:11.474991 1129259 cri.go:89] found id: ""
	I0318 14:25:11.475039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.475050 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:11.475058 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:11.475128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:11.511152 1129259 cri.go:89] found id: ""
	I0318 14:25:11.511185 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.511195 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:11.511204 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:11.511271 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:11.549752 1129259 cri.go:89] found id: ""
	I0318 14:25:11.549794 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.549806 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:11.549814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:11.549886 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:11.587089 1129259 cri.go:89] found id: ""
	I0318 14:25:11.587117 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.587135 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:11.587152 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:11.587205 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:11.621515 1129259 cri.go:89] found id: ""
	I0318 14:25:11.621547 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.621559 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:11.621574 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:11.621592 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:11.680905 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:11.680948 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:11.696472 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:11.696508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:11.772013 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:11.772035 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:11.772054 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:11.855131 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:11.855182 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:14.396034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:14.410601 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:14.410677 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:14.449351 1129259 cri.go:89] found id: ""
	I0318 14:25:14.449392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.449404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:14.449413 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:14.449484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:14.488011 1129259 cri.go:89] found id: ""
	I0318 14:25:14.488039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.488049 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:14.488055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:14.488115 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:14.529089 1129259 cri.go:89] found id: ""
	I0318 14:25:14.529128 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.529141 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:14.529148 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:14.529219 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:14.567919 1129259 cri.go:89] found id: ""
	I0318 14:25:14.567952 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.567962 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:14.567975 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:14.568039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:14.604744 1129259 cri.go:89] found id: ""
	I0318 14:25:14.604785 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.604798 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:14.604806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:14.604872 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:14.643367 1129259 cri.go:89] found id: ""
	I0318 14:25:14.643396 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.643405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:14.643411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:14.643473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:14.680584 1129259 cri.go:89] found id: ""
	I0318 14:25:14.680623 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.680639 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:14.680652 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:14.680726 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:14.720040 1129259 cri.go:89] found id: ""
	I0318 14:25:14.720070 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.720080 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:14.720092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:14.720106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:14.773483 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:14.773525 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:14.788628 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:14.788664 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:14.862912 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:14.862941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:14.862959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:14.945001 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:14.945047 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:13.199314 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.199666 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.812120 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.813219 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.814195 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:16.800432 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.299589 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.491984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:17.505305 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:17.505373 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:17.548465 1129259 cri.go:89] found id: ""
	I0318 14:25:17.548493 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.548501 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:17.548508 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:17.548566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:17.590043 1129259 cri.go:89] found id: ""
	I0318 14:25:17.590075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.590084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:17.590090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:17.590147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:17.628014 1129259 cri.go:89] found id: ""
	I0318 14:25:17.628042 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.628051 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:17.628057 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:17.628108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:17.666781 1129259 cri.go:89] found id: ""
	I0318 14:25:17.666814 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.666826 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:17.666835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:17.666892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:17.705989 1129259 cri.go:89] found id: ""
	I0318 14:25:17.706028 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.706048 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:17.706056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:17.706134 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:17.743782 1129259 cri.go:89] found id: ""
	I0318 14:25:17.743815 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.743843 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:17.743853 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:17.743923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:17.787400 1129259 cri.go:89] found id: ""
	I0318 14:25:17.787431 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.787439 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:17.787446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:17.787509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:17.825236 1129259 cri.go:89] found id: ""
	I0318 14:25:17.825270 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.825279 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:17.825291 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:17.825309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:17.877845 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:17.877888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:17.893733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:17.893768 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:17.987782 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:17.987809 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:17.987845 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:18.077756 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:18.077802 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:20.625530 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:20.639692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:20.639783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:20.678892 1129259 cri.go:89] found id: ""
	I0318 14:25:20.678927 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.678939 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:20.678948 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:20.679020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:20.716077 1129259 cri.go:89] found id: ""
	I0318 14:25:20.716109 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.716119 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:20.716124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:20.716179 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:20.756708 1129259 cri.go:89] found id: ""
	I0318 14:25:20.756737 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.756748 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:20.756756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:20.756823 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:20.793692 1129259 cri.go:89] found id: ""
	I0318 14:25:20.793728 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.793740 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:20.793749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:20.793822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:20.834607 1129259 cri.go:89] found id: ""
	I0318 14:25:20.834638 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.834649 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:20.834657 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:20.834728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:20.872583 1129259 cri.go:89] found id: ""
	I0318 14:25:20.872616 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.872625 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:20.872632 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:20.872688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:20.906061 1129259 cri.go:89] found id: ""
	I0318 14:25:20.906099 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.906112 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:20.906120 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:20.906183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:20.942582 1129259 cri.go:89] found id: ""
	I0318 14:25:20.942612 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.942621 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:20.942632 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:20.942646 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:20.958461 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:20.958500 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:21.032841 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:21.032867 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:21.032896 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:21.110717 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:21.110764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:17.698783 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.698980 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.804733 1128788 pod_ready.go:81] duration metric: took 4m0.000568505s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:21.804764 1128788 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:21.804783 1128788 pod_ready.go:38] duration metric: took 4m13.068724908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:21.804834 1128788 kubeadm.go:591] duration metric: took 4m21.284795634s to restartPrimaryControlPlane
	W0318 14:25:21.804919 1128788 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:21.804954 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:21.300889 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:23.800547 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:25.803188 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.160015 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:21.160055 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:23.715103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:23.729231 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:23.729324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:23.779123 1129259 cri.go:89] found id: ""
	I0318 14:25:23.779157 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.779166 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:23.779172 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:23.779247 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:23.820353 1129259 cri.go:89] found id: ""
	I0318 14:25:23.820397 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.820410 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:23.820427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:23.820498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:23.857375 1129259 cri.go:89] found id: ""
	I0318 14:25:23.857405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.857416 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:23.857422 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:23.857490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:23.895114 1129259 cri.go:89] found id: ""
	I0318 14:25:23.895153 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.895165 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:23.895173 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:23.895239 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:23.939728 1129259 cri.go:89] found id: ""
	I0318 14:25:23.939764 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.939776 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:23.939784 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:23.939866 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:23.980585 1129259 cri.go:89] found id: ""
	I0318 14:25:23.980618 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.980631 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:23.980640 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:23.980711 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:24.019562 1129259 cri.go:89] found id: ""
	I0318 14:25:24.019596 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.019604 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:24.019611 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:24.019700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:24.069418 1129259 cri.go:89] found id: ""
	I0318 14:25:24.069455 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.069466 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:24.069478 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:24.069502 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:24.150859 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:24.150893 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:24.150913 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:24.258358 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:24.258408 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:24.304571 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:24.304609 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:24.366826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:24.366882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:21.699436 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:24.199193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:28.300495 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:30.300870 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:26.886056 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:26.904239 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:26.904315 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:26.950812 1129259 cri.go:89] found id: ""
	I0318 14:25:26.950847 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.950859 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:26.950866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:26.950957 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:26.999189 1129259 cri.go:89] found id: ""
	I0318 14:25:26.999224 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.999237 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:26.999246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:26.999312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:27.040452 1129259 cri.go:89] found id: ""
	I0318 14:25:27.040488 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.040499 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:27.040505 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:27.040586 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:27.078751 1129259 cri.go:89] found id: ""
	I0318 14:25:27.078782 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.078792 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:27.078798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:27.078865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:27.116428 1129259 cri.go:89] found id: ""
	I0318 14:25:27.116465 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.116477 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:27.116486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:27.116567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:27.152882 1129259 cri.go:89] found id: ""
	I0318 14:25:27.152922 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.152934 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:27.152942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:27.153023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:27.194470 1129259 cri.go:89] found id: ""
	I0318 14:25:27.194506 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.194518 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:27.194528 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:27.194599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:27.235910 1129259 cri.go:89] found id: ""
	I0318 14:25:27.235939 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.235948 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:27.235959 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:27.235973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:27.302132 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:27.302189 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:27.315806 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:27.315866 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:27.398210 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:27.398240 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:27.398255 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:27.479388 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:27.479432 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:30.026721 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:30.043060 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:30.043133 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:30.083373 1129259 cri.go:89] found id: ""
	I0318 14:25:30.083405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.083415 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:30.083423 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:30.083498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:30.121448 1129259 cri.go:89] found id: ""
	I0318 14:25:30.121485 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.121498 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:30.121506 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:30.121587 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:30.160527 1129259 cri.go:89] found id: ""
	I0318 14:25:30.160557 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.160566 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:30.160574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:30.160636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:30.199812 1129259 cri.go:89] found id: ""
	I0318 14:25:30.199870 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.199884 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:30.199895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:30.199970 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:30.242922 1129259 cri.go:89] found id: ""
	I0318 14:25:30.242959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.242971 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:30.242983 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:30.243053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:30.280918 1129259 cri.go:89] found id: ""
	I0318 14:25:30.280949 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.280962 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:30.280968 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:30.281021 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:30.319928 1129259 cri.go:89] found id: ""
	I0318 14:25:30.319959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.319968 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:30.319974 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:30.320040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:30.363693 1129259 cri.go:89] found id: ""
	I0318 14:25:30.363723 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.363733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:30.363744 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:30.363757 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:30.419559 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:30.419608 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:30.435030 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:30.435078 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:30.514849 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:30.514885 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:30.514903 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:30.601660 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:30.601711 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:26.700384 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:29.203012 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:32.800506 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:35.299464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.150817 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:33.165959 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:33.166045 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:33.205823 1129259 cri.go:89] found id: ""
	I0318 14:25:33.205862 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.205874 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:33.205884 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:33.205951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:33.267817 1129259 cri.go:89] found id: ""
	I0318 14:25:33.267865 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.267878 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:33.267886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:33.267977 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:33.309310 1129259 cri.go:89] found id: ""
	I0318 14:25:33.309338 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.309346 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:33.309353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:33.309417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:33.350169 1129259 cri.go:89] found id: ""
	I0318 14:25:33.350202 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.350215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:33.350223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:33.350289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:33.391919 1129259 cri.go:89] found id: ""
	I0318 14:25:33.391961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.391973 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:33.391981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:33.392049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:33.433001 1129259 cri.go:89] found id: ""
	I0318 14:25:33.433056 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.433069 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:33.433078 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:33.433150 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:33.474482 1129259 cri.go:89] found id: ""
	I0318 14:25:33.474513 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.474533 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:33.474542 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:33.474603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:33.512280 1129259 cri.go:89] found id: ""
	I0318 14:25:33.512314 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.512323 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:33.512333 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:33.512347 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:33.593336 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:33.593378 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:33.636001 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:33.636038 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:33.688881 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:33.688922 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:33.704549 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:33.704580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:33.779659 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:31.698372 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.699450 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.199443 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:37.299695 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:39.800741 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.280240 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:36.295566 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:36.295646 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:36.336195 1129259 cri.go:89] found id: ""
	I0318 14:25:36.336235 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.336248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:36.336257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:36.336334 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:36.378038 1129259 cri.go:89] found id: ""
	I0318 14:25:36.378084 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.378099 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:36.378110 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:36.378191 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:36.425389 1129259 cri.go:89] found id: ""
	I0318 14:25:36.425433 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.425446 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:36.425453 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:36.425512 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:36.464639 1129259 cri.go:89] found id: ""
	I0318 14:25:36.464683 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.464749 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:36.464763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:36.464828 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:36.509515 1129259 cri.go:89] found id: ""
	I0318 14:25:36.509550 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.509563 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:36.509573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:36.509645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:36.554761 1129259 cri.go:89] found id: ""
	I0318 14:25:36.554789 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.554800 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:36.554806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:36.554859 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:36.593817 1129259 cri.go:89] found id: ""
	I0318 14:25:36.593852 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.593861 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:36.593868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:36.593923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:36.634005 1129259 cri.go:89] found id: ""
	I0318 14:25:36.634038 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.634050 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:36.634063 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:36.634081 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:36.687869 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:36.687910 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:36.704507 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:36.704550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:36.785201 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:36.785257 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:36.785275 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:36.866058 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:36.866104 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:39.409796 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:39.426897 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:39.426972 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:39.472221 1129259 cri.go:89] found id: ""
	I0318 14:25:39.472257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.472269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:39.472285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:39.472352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:39.513920 1129259 cri.go:89] found id: ""
	I0318 14:25:39.513961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.513974 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:39.513981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:39.514049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:39.555502 1129259 cri.go:89] found id: ""
	I0318 14:25:39.555538 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.555552 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:39.555565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:39.555627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:39.601583 1129259 cri.go:89] found id: ""
	I0318 14:25:39.601614 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.601622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:39.601628 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:39.601693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:39.648429 1129259 cri.go:89] found id: ""
	I0318 14:25:39.648464 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.648473 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:39.648488 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:39.648564 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:39.698498 1129259 cri.go:89] found id: ""
	I0318 14:25:39.698531 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.698543 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:39.698551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:39.698617 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:39.751350 1129259 cri.go:89] found id: ""
	I0318 14:25:39.751392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.751403 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:39.751411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:39.751482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:39.801912 1129259 cri.go:89] found id: ""
	I0318 14:25:39.801944 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.801956 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:39.801968 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:39.801987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:39.816041 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:39.816076 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:39.899569 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:39.899599 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:39.899621 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:39.980913 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:39.980961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:40.026279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:40.026319 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:38.199879 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:40.698620 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:41.801098 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:44.301379 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:42.585034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:42.601055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:42.601161 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:42.652386 1129259 cri.go:89] found id: ""
	I0318 14:25:42.652422 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.652434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:42.652442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:42.652517 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:42.703304 1129259 cri.go:89] found id: ""
	I0318 14:25:42.703341 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.703353 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:42.703361 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:42.703433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:42.747938 1129259 cri.go:89] found id: ""
	I0318 14:25:42.747972 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.747983 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:42.747992 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:42.748061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:42.793889 1129259 cri.go:89] found id: ""
	I0318 14:25:42.793923 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.793934 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:42.793943 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:42.794012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:42.837991 1129259 cri.go:89] found id: ""
	I0318 14:25:42.838096 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.838124 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:42.838143 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:42.838225 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:42.881892 1129259 cri.go:89] found id: ""
	I0318 14:25:42.882011 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.882036 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:42.882055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:42.882140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:42.921175 1129259 cri.go:89] found id: ""
	I0318 14:25:42.921217 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.921229 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:42.921238 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:42.921310 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:42.966634 1129259 cri.go:89] found id: ""
	I0318 14:25:42.966674 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.966687 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:42.966702 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:42.966720 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:42.982243 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:42.982290 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:43.082154 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:43.082187 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:43.082205 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:43.175904 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:43.175953 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:43.220128 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:43.220224 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:45.785917 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:45.801648 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:45.801736 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:45.842731 1129259 cri.go:89] found id: ""
	I0318 14:25:45.842769 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.842782 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:45.842797 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:45.842858 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:45.887726 1129259 cri.go:89] found id: ""
	I0318 14:25:45.887771 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.887783 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:45.887792 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:45.887900 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:45.929349 1129259 cri.go:89] found id: ""
	I0318 14:25:45.929384 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.929395 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:45.929401 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:45.929473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:45.971540 1129259 cri.go:89] found id: ""
	I0318 14:25:45.971582 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.971595 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:45.971604 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:45.971681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:46.012461 1129259 cri.go:89] found id: ""
	I0318 14:25:46.012499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.012521 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:46.012530 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:46.012607 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:46.057527 1129259 cri.go:89] found id: ""
	I0318 14:25:46.057556 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.057566 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:46.057572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:46.057628 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:46.101115 1129259 cri.go:89] found id: ""
	I0318 14:25:46.101146 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.101156 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:46.101163 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:46.101218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:46.144690 1129259 cri.go:89] found id: ""
	I0318 14:25:46.144722 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.144733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:46.144747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:46.144763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:41.692077 1128964 pod_ready.go:81] duration metric: took 4m0.00104s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:41.692109 1128964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:41.692136 1128964 pod_ready.go:38] duration metric: took 4m13.711186182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:41.692170 1128964 kubeadm.go:591] duration metric: took 4m21.341445822s to restartPrimaryControlPlane
	W0318 14:25:41.692279 1128964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:41.692345 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:46.800687 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:49.300012 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:46.198508 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:46.198552 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:46.213920 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:46.213959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:46.307837 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:46.307870 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:46.307884 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:46.393348 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:46.393393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:48.947758 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:48.963529 1129259 kubeadm.go:591] duration metric: took 4m3.701563316s to restartPrimaryControlPlane
	W0318 14:25:48.963609 1129259 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:48.963632 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:50.782362 1129259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.818697959s)
	I0318 14:25:50.782464 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:50.798866 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:50.810841 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:50.822394 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:50.822417 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:50.822464 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:50.833695 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:50.833763 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:50.845393 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:50.856807 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:50.856882 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:50.868756 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.879442 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:50.879517 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.890725 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:50.901505 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:50.901576 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:25:50.912911 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:50.994085 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:25:50.994244 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:51.166111 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:51.166240 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:51.166390 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:51.374393 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:51.376093 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:51.376230 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:51.376323 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:51.376464 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:51.376538 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:51.376620 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:51.376715 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:51.376821 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:51.376930 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:51.377042 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:51.377141 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:51.377202 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:51.377292 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:51.485218 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:51.556003 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:51.865954 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:52.103582 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:52.120863 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:52.122310 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:52.122433 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:52.280292 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:54.173048 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.368065771s)
	I0318 14:25:54.173145 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:54.192139 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:54.204909 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:54.217096 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:54.217126 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:54.217182 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:54.227905 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:54.228009 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:54.239854 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:54.250668 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:54.250744 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:54.263509 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.274202 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:54.274265 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.285342 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:54.296064 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:54.296157 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:25:54.307985 1128788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:54.371118 1128788 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:25:54.371202 1128788 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:54.551187 1128788 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:54.551377 1128788 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:54.551551 1128788 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:54.780034 1128788 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:54.782426 1128788 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:54.782545 1128788 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:54.782650 1128788 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:54.782735 1128788 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:54.782829 1128788 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:54.782930 1128788 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:54.783213 1128788 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:54.783717 1128788 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:54.784390 1128788 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:54.784849 1128788 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:54.785263 1128788 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:54.785725 1128788 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:54.785826 1128788 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:55.130998 1128788 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:55.387076 1128788 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:55.517240 1128788 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:51.300209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:53.303010 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.800703 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.906565 1128788 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:55.907198 1128788 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:55.909674 1128788 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:52.282451 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:25:52.282559 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:52.289015 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:52.290093 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:52.290987 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:52.293794 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:55.912196 1128788 out.go:204]   - Booting up control plane ...
	I0318 14:25:55.912323 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:55.912407 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:55.912494 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:55.932596 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:55.935171 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:55.935520 1128788 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:56.083395 1128788 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:58.300288 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:00.800291 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:02.086878 1128788 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002842 seconds
	I0318 14:26:02.087052 1128788 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:02.102499 1128788 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:02.637889 1128788 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:02.638152 1128788 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-767719 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:03.157386 1128788 kubeadm.go:309] [bootstrap-token] Using token: do2whq.efhsaljmpmqgv9gj
	I0318 14:26:03.159248 1128788 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:03.159429 1128788 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:03.167328 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:03.180628 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:03.185253 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:03.190014 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:03.202714 1128788 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:03.223282 1128788 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:03.504303 1128788 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:03.614837 1128788 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:03.614872 1128788 kubeadm.go:309] 
	I0318 14:26:03.614978 1128788 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:03.615004 1128788 kubeadm.go:309] 
	I0318 14:26:03.615107 1128788 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:03.615117 1128788 kubeadm.go:309] 
	I0318 14:26:03.615149 1128788 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:03.615219 1128788 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:03.615285 1128788 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:03.615293 1128788 kubeadm.go:309] 
	I0318 14:26:03.615354 1128788 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:03.615365 1128788 kubeadm.go:309] 
	I0318 14:26:03.615421 1128788 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:03.615430 1128788 kubeadm.go:309] 
	I0318 14:26:03.615486 1128788 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:03.615578 1128788 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:03.615669 1128788 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:03.615679 1128788 kubeadm.go:309] 
	I0318 14:26:03.615778 1128788 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:03.615887 1128788 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:03.615897 1128788 kubeadm.go:309] 
	I0318 14:26:03.615998 1128788 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616120 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:03.616149 1128788 kubeadm.go:309] 	--control-plane 
	I0318 14:26:03.616159 1128788 kubeadm.go:309] 
	I0318 14:26:03.616266 1128788 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:03.616276 1128788 kubeadm.go:309] 
	I0318 14:26:03.616371 1128788 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616500 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:03.617330 1128788 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:03.617374 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:26:03.617384 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:03.619394 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:03.620836 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:03.665582 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:26:03.812834 1128788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:03.812897 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:03.812943 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-767719 minikube.k8s.io/updated_at=2024_03_18T14_26_03_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=embed-certs-767719 minikube.k8s.io/primary=true
	I0318 14:26:03.899419 1128788 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:04.104407 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:04.604499 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.104532 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.605047 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:02.800707 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:04.802167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:06.105187 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:06.604462 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.104411 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.605096 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.104448 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.604430 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.104707 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.605130 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.104955 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.605165 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.300575 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:09.798776 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:11.104436 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.605273 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.104851 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.604819 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.104669 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.605089 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.105486 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.604568 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.104455 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.604422 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.799935 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:13.800907 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:15.801754 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:16.105107 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:16.205506 1128788 kubeadm.go:1107] duration metric: took 12.39266353s to wait for elevateKubeSystemPrivileges
	W0318 14:26:16.205558 1128788 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:16.205570 1128788 kubeadm.go:393] duration metric: took 5m15.738081871s to StartCluster
	I0318 14:26:16.205599 1128788 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.205720 1128788 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:16.208645 1128788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.209157 1128788 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:16.210915 1128788 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:16.209206 1128788 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:16.209401 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:16.212258 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:16.212275 1128788 addons.go:69] Setting default-storageclass=true in profile "embed-certs-767719"
	I0318 14:26:16.212351 1128788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-767719"
	I0318 14:26:16.212260 1128788 addons.go:69] Setting metrics-server=true in profile "embed-certs-767719"
	I0318 14:26:16.212415 1128788 addons.go:234] Setting addon metrics-server=true in "embed-certs-767719"
	W0318 14:26:16.212431 1128788 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:16.212469 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212260 1128788 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-767719"
	I0318 14:26:16.212512 1128788 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-767719"
	W0318 14:26:16.212527 1128788 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:16.212560 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.212983 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213003 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213028 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.213040 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.231532 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0318 14:26:16.231543 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0318 14:26:16.232128 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0318 14:26:16.232280 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232284 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232882 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.232907 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.232922 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.233258 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233284 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.233360 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.233479 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233501 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.235956 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236151 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236372 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236411 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.236545 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236568 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.240163 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.244336 1128788 addons.go:234] Setting addon default-storageclass=true in "embed-certs-767719"
	W0318 14:26:16.244370 1128788 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:16.244407 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.244845 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.244894 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.257940 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0318 14:26:16.258701 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.259359 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.259386 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.259769 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.260030 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.262272 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.262286 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0318 14:26:16.264459 1128788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:16.262834 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.265430 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I0318 14:26:16.266198 1128788 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.266220 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:16.266240 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.266482 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.266663 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.266676 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267253 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.267277 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267753 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.268456 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.268605 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.269068 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.269098 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.269804 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270398 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.270420 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270711 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.270989 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.271183 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.271362 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.271984 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.273854 1128788 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:14.305258 1128964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.612890386s)
	I0318 14:26:14.305324 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:14.325572 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:26:14.337875 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:26:14.350490 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:26:14.350530 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:26:14.350592 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:26:14.361521 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:26:14.361612 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:26:14.372767 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:26:14.383545 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:26:14.383614 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:26:14.394057 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.404187 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:26:14.404261 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.415029 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:26:14.425738 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:26:14.425820 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:26:14.436847 1128964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:26:14.674909 1128964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:16.275278 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:16.275298 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:16.275323 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.278500 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.278909 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.278939 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.279230 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.279437 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.279612 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.279748 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.286716 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0318 14:26:16.287176 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.287651 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.287678 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.288057 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.288248 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.290084 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.290359 1128788 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.290381 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:16.290404 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.293253 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293662 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.293688 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293886 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.294078 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.294241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.294398 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.460832 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:16.537089 1128788 node_ready.go:35] waiting up to 6m0s for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550362 1128788 node_ready.go:49] node "embed-certs-767719" has status "Ready":"True"
	I0318 14:26:16.550391 1128788 node_ready.go:38] duration metric: took 13.195546ms for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550405 1128788 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:16.557745 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:16.638531 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:16.638565 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:16.664638 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.762661 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:16.762713 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:16.792712 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.859169 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:16.859200 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:16.954827 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:18.103559 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.103592 1128788 pod_ready.go:81] duration metric: took 1.545818643s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.103606 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.256039 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.591350359s)
	I0318 14:26:18.256112 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256129 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256483 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256513 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256530 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256528 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.256541 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256918 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256936 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256950 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.264761 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.264788 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.265133 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.265164 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.265193 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.652953 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.653088 1128788 pod_ready.go:81] duration metric: took 549.466665ms for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.653124 1128788 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674506 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.674553 1128788 pod_ready.go:81] duration metric: took 21.386005ms for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674568 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.680422 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.887663901s)
	I0318 14:26:18.680486 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680498 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.680875 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.680887 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.680903 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.680921 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680928 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.681198 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.681199 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.681277 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.711919 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.711954 1128788 pod_ready.go:81] duration metric: took 37.376915ms for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.711968 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730096 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.730129 1128788 pod_ready.go:81] duration metric: took 18.151839ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730145 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.756000 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.801120989s)
	I0318 14:26:18.756076 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756091 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756416 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756435 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756445 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756452 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756849 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756883 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756895 1128788 addons.go:470] Verifying addon metrics-server=true in "embed-certs-767719"
	I0318 14:26:18.756917 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.759019 1128788 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 14:26:18.760442 1128788 addons.go:505] duration metric: took 2.551236037s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
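(For reference: the addon rollout recorded above can also be checked after the fact from the host. A minimal sketch, assuming only the profile/context name "embed-certs-767719" taken from the log; these commands are not part of the test run itself:

    minikube addons list -p embed-certs-767719
    kubectl --context embed-certs-767719 -n kube-system get pods
)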
	I0318 14:26:18.942164 1128788 pod_ready.go:92] pod "kube-proxy-f4547" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.942196 1128788 pod_ready.go:81] duration metric: took 212.040337ms for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.942205 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341772 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:19.341808 1128788 pod_ready.go:81] duration metric: took 399.594033ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341820 1128788 pod_ready.go:38] duration metric: took 2.791403027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:19.341841 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:19.341921 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:19.362110 1128788 api_server.go:72] duration metric: took 3.152894755s to wait for apiserver process to appear ...
	I0318 14:26:19.362150 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:19.362209 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:26:19.368138 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:26:19.369583 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:19.369608 1128788 api_server.go:131] duration metric: took 7.450993ms to wait for apiserver health ...
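(The healthz probe above can be reproduced by hand against the same endpoint shown in the log; a minimal sketch, with -k assumed only to skip certificate verification for a quick check:

    curl -k https://192.168.72.45:8443/healthz
    # expected response body: ok
)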
	I0318 14:26:19.369617 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:19.545388 1128788 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:19.545423 1128788 system_pods.go:61] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.545428 1128788 system_pods.go:61] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.545431 1128788 system_pods.go:61] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.545434 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.545438 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.545441 1128788 system_pods.go:61] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.545443 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.545449 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.545455 1128788 system_pods.go:61] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.545464 1128788 system_pods.go:74] duration metric: took 175.840386ms to wait for pod list to return data ...
	I0318 14:26:19.545473 1128788 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:19.741364 1128788 default_sa.go:45] found service account: "default"
	I0318 14:26:19.741405 1128788 default_sa.go:55] duration metric: took 195.920075ms for default service account to be created ...
	I0318 14:26:19.741424 1128788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:19.945000 1128788 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:19.945039 1128788 system_pods.go:89] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.945047 1128788 system_pods.go:89] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.945053 1128788 system_pods.go:89] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.945060 1128788 system_pods.go:89] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.945066 1128788 system_pods.go:89] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.945070 1128788 system_pods.go:89] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.945076 1128788 system_pods.go:89] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.945087 1128788 system_pods.go:89] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.945097 1128788 system_pods.go:89] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.945110 1128788 system_pods.go:126] duration metric: took 203.67742ms to wait for k8s-apps to be running ...
	I0318 14:26:19.945122 1128788 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:19.945188 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:19.987286 1128788 system_svc.go:56] duration metric: took 42.149434ms WaitForService to wait for kubelet
	I0318 14:26:19.987328 1128788 kubeadm.go:576] duration metric: took 3.778120092s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:19.987361 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:20.141763 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:20.141803 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:20.141822 1128788 node_conditions.go:105] duration metric: took 154.45408ms to run NodePressure ...
	I0318 14:26:20.141840 1128788 start.go:240] waiting for startup goroutines ...
	I0318 14:26:20.141851 1128788 start.go:245] waiting for cluster config update ...
	I0318 14:26:20.141867 1128788 start.go:254] writing updated cluster config ...
	I0318 14:26:20.142268 1128788 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:20.206832 1128788 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:20.209057 1128788 out.go:177] * Done! kubectl is now configured to use "embed-certs-767719" cluster and "default" namespace by default
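(At this point the kubeconfig context matches the profile name, so a quick smoke test from the host would look roughly like the following sketch; only the context name is taken from the log:

    kubectl --context embed-certs-767719 get nodes
    kubectl --context embed-certs-767719 -n kube-system get pods
)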
	I0318 14:26:18.302228 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:20.799704 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.444912 1128964 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:26:23.444993 1128964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:26:23.445098 1128964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:26:23.445212 1128964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:26:23.445359 1128964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:26:23.445461 1128964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:26:23.446790 1128964 out.go:204]   - Generating certificates and keys ...
	I0318 14:26:23.446904 1128964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:26:23.446986 1128964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:26:23.447102 1128964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:26:23.447194 1128964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:26:23.447309 1128964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:26:23.447376 1128964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:26:23.447453 1128964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:26:23.447529 1128964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:26:23.447607 1128964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:26:23.447693 1128964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:26:23.447741 1128964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:26:23.447856 1128964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:26:23.447937 1128964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:26:23.448019 1128964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:26:23.448121 1128964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:26:23.448194 1128964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:26:23.448311 1128964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:26:23.448422 1128964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:26:23.450038 1128964 out.go:204]   - Booting up control plane ...
	I0318 14:26:23.450174 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:26:23.450282 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:26:23.450371 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:26:23.450509 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:26:23.450633 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:26:23.450671 1128964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:26:23.450818 1128964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:26:23.450887 1128964 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.005932 seconds
	I0318 14:26:23.450974 1128964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:23.451093 1128964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:23.451143 1128964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:23.451340 1128964 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-075922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:23.451414 1128964 kubeadm.go:309] [bootstrap-token] Using token: k51w96.h8xduusjdfbez3gf
	I0318 14:26:23.452848 1128964 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:23.452964 1128964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:23.453073 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:23.453269 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:23.453499 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:23.453664 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:23.453785 1128964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:23.453940 1128964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:23.454005 1128964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:23.454074 1128964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:23.454084 1128964 kubeadm.go:309] 
	I0318 14:26:23.454172 1128964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:23.454186 1128964 kubeadm.go:309] 
	I0318 14:26:23.454288 1128964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:23.454298 1128964 kubeadm.go:309] 
	I0318 14:26:23.454335 1128964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:23.454412 1128964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:23.454475 1128964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:23.454484 1128964 kubeadm.go:309] 
	I0318 14:26:23.454528 1128964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:23.454538 1128964 kubeadm.go:309] 
	I0318 14:26:23.454592 1128964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:23.454599 1128964 kubeadm.go:309] 
	I0318 14:26:23.454681 1128964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:23.454804 1128964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:23.454907 1128964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:23.454919 1128964 kubeadm.go:309] 
	I0318 14:26:23.455027 1128964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:23.455146 1128964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:23.455157 1128964 kubeadm.go:309] 
	I0318 14:26:23.455264 1128964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455401 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:23.455433 1128964 kubeadm.go:309] 	--control-plane 
	I0318 14:26:23.455441 1128964 kubeadm.go:309] 
	I0318 14:26:23.455551 1128964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:23.455560 1128964 kubeadm.go:309] 
	I0318 14:26:23.455666 1128964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455814 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:23.455838 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:26:23.455849 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:23.457678 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:22.801209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:25.305096 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.459285 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:23.475803 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
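(The bridge CNI configuration is written as a single conflist file on the node. A hypothetical way to inspect it after the run; the path comes from the log, the ssh invocation and profile name "default-k8s-diff-port-075922" are taken from the surrounding context:

    minikube -p default-k8s-diff-port-075922 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
)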
	I0318 14:26:23.515652 1128964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-075922 minikube.k8s.io/updated_at=2024_03_18T14_26_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=default-k8s-diff-port-075922 minikube.k8s.io/primary=true
	I0318 14:26:23.796828 1128964 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:23.796947 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.296970 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.797728 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.297564 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.797144 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:26.297056 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.800960 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:29.802967 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:26.798004 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.297935 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.797550 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.297031 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.797624 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.297549 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.797256 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.297964 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.797927 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:31.297742 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.300787 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:34.800941 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:31.797040 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.297155 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.797371 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.297809 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.797723 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.297045 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.797008 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.297030 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.797767 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.895914 1128964 kubeadm.go:1107] duration metric: took 12.380212538s to wait for elevateKubeSystemPrivileges
	W0318 14:26:35.895975 1128964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:35.895987 1128964 kubeadm.go:393] duration metric: took 5m15.606276512s to StartCluster
	I0318 14:26:35.896013 1128964 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.896123 1128964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:35.898023 1128964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.898324 1128964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:35.900235 1128964 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:35.898415 1128964 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:35.898550 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:35.901588 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:35.901599 1128964 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901617 1128964 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901640 1128964 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901650 1128964 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:35.901665 1128964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-075922"
	I0318 14:26:35.901588 1128964 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901698 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.901723 1128964 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901735 1128964 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:35.901764 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.902055 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902088 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902097 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902126 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902130 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902169 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.919538 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0318 14:26:35.920140 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.920836 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.920864 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.921282 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.921940 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.921983 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.923313 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
	I0318 14:26:35.923321 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0318 14:26:35.923742 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.923792 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.924263 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924280 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924381 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924395 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924710 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924733 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924893 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.925215 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.925235 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.928021 1128964 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.928047 1128964 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:35.928081 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.928422 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.928449 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.941908 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0318 14:26:35.942465 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.943114 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.943146 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.943757 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.943991 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.944493 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0318 14:26:35.944874 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.945387 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.945404 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.945865 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.945988 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.948302 1128964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:35.946821 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.947744 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0318 14:26:35.950087 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:35.950110 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:35.950135 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.950181 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.950664 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.951258 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.951295 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.951755 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.952146 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.953842 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954331 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.954353 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954360 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.954563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.956253 1128964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:35.954739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:32.294235 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:26:32.295514 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:32.295750 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
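(The kubelet-check failure above, from a separate profile whose name is not visible in this excerpt, is the usual symptom of the kubelet not coming up. The standard next diagnostic steps on the node would be along these lines; a sketch, not commands taken from this log:

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    curl -sSL http://localhost:10248/healthz
)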
	I0318 14:26:35.956487 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.957743 1128964 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:35.957764 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:35.957783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.957864 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.960451 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.960896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.960929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.961107 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.961281 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.961435 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.961565 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.968795 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0318 14:26:35.969191 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.969631 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.969646 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.969955 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.970117 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.971799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.972169 1128964 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:35.972188 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:35.972206 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.974906 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975268 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.975301 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975551 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.975767 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.975958 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.976137 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:36.122420 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:36.139655 1128964 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160857 1128964 node_ready.go:49] node "default-k8s-diff-port-075922" has status "Ready":"True"
	I0318 14:26:36.160883 1128964 node_ready.go:38] duration metric: took 21.193343ms for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160893 1128964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:36.176832 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:36.240357 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:36.240385 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:36.261620 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:36.279644 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:36.294510 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:36.294546 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:36.374231 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:36.376166 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:36.419045 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:38.032072 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.752379015s)
	I0318 14:26:38.032148 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032161 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032374 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.770714521s)
	I0318 14:26:38.032416 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032427 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032623 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032652 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032660 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032683 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032796 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032814 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032817 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032835 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032848 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.033046 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033107 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033173 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.033149 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033259 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033284 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.112866 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.112896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.113337 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.113362 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.113384 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176199 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.757085355s)
	I0318 14:26:38.176281 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176302 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176669 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176683 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176697 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176707 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176716 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176955 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176969 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176980 1128964 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-075922"
	I0318 14:26:38.178714 1128964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:26:37.300219 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:39.293136 1128583 pod_ready.go:81] duration metric: took 4m0.000606722s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
	E0318 14:26:39.293173 1128583 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:26:39.293203 1128583 pod_ready.go:38] duration metric: took 4m14.549283732s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:39.293239 1128583 kubeadm.go:591] duration metric: took 4m22.862167815s to restartPrimaryControlPlane
	W0318 14:26:39.293320 1128583 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:26:39.293362 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:26:37.296327 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:37.296642 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:38.180451 1128964 addons.go:505] duration metric: took 2.282033093s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:26:38.194239 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:40.186091 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.186125 1128964 pod_ready.go:81] duration metric: took 4.009253844s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.186139 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193026 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.193059 1128964 pod_ready.go:81] duration metric: took 6.912513ms for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193069 1128964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199244 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.199272 1128964 pod_ready.go:81] duration metric: took 6.195834ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199283 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.204991 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.205019 1128964 pod_ready.go:81] duration metric: took 5.728459ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.205034 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214706 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.214730 1128964 pod_ready.go:81] duration metric: took 9.687528ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214739 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.581970 1128964 pod_ready.go:92] pod "kube-proxy-bzwvf" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.582045 1128964 pod_ready.go:81] duration metric: took 367.297496ms for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.582059 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981562 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.981592 1128964 pod_ready.go:81] duration metric: took 399.525488ms for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981601 1128964 pod_ready.go:38] duration metric: took 4.820697544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:40.981618 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:40.981676 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:40.998626 1128964 api_server.go:72] duration metric: took 5.100242538s to wait for apiserver process to appear ...
	I0318 14:26:40.998672 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:40.998703 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:26:41.010986 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:26:41.012714 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:41.012742 1128964 api_server.go:131] duration metric: took 14.061953ms to wait for apiserver health ...
	I0318 14:26:41.012750 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:41.186873 1128964 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:41.186910 1128964 system_pods.go:61] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.186917 1128964 system_pods.go:61] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.186922 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.186935 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.186943 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.186948 1128964 system_pods.go:61] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.186953 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.187013 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.187029 1128964 system_pods.go:61] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.187041 1128964 system_pods.go:74] duration metric: took 174.283401ms to wait for pod list to return data ...
	I0318 14:26:41.187054 1128964 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:41.381195 1128964 default_sa.go:45] found service account: "default"
	I0318 14:26:41.381238 1128964 default_sa.go:55] duration metric: took 194.17219ms for default service account to be created ...
	I0318 14:26:41.381252 1128964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:41.584896 1128964 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:41.584934 1128964 system_pods.go:89] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.584940 1128964 system_pods.go:89] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.584945 1128964 system_pods.go:89] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.584952 1128964 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.584957 1128964 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.584961 1128964 system_pods.go:89] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.584965 1128964 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.584974 1128964 system_pods.go:89] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.584980 1128964 system_pods.go:89] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.584996 1128964 system_pods.go:126] duration metric: took 203.730421ms to wait for k8s-apps to be running ...
	I0318 14:26:41.585011 1128964 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:41.585065 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:41.602211 1128964 system_svc.go:56] duration metric: took 17.185915ms WaitForService to wait for kubelet
	I0318 14:26:41.602253 1128964 kubeadm.go:576] duration metric: took 5.703881545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:41.602283 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:41.781292 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:41.781321 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:41.781333 1128964 node_conditions.go:105] duration metric: took 179.044515ms to run NodePressure ...
	I0318 14:26:41.781345 1128964 start.go:240] waiting for startup goroutines ...
	I0318 14:26:41.781352 1128964 start.go:245] waiting for cluster config update ...
	I0318 14:26:41.781363 1128964 start.go:254] writing updated cluster config ...
	I0318 14:26:41.781670 1128964 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:41.845950 1128964 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:41.848522 1128964 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-075922" cluster and "default" namespace by default
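
Note on the "[kubelet-check]" lines from process 1129259 that follow: kubeadm repeatedly probes the kubelet's local healthz endpoint on port 10248 and keeps getting "connection refused" because the kubelet never came up on that node. A minimal Go sketch of that kind of probe loop (illustrative only; the URL and port are taken from the log, the function name, retry interval, and deadline are assumptions and this is not kubeadm's actual code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeKubeletHealthz polls the kubelet's local healthz endpoint until it
    // answers 200 OK or the deadline passes, mirroring the check whose
    // failures show up below as "[kubelet-check] ... connection refused".
    func probeKubeletHealthz(deadline time.Duration) error {
        client := &http.Client{Timeout: 5 * time.Second}
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get("http://localhost:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // kubelet is up and healthy
                }
            }
            time.Sleep(5 * time.Second) // retry interval is an assumption
        }
        return fmt.Errorf("kubelet healthz did not become ready within %s", deadline)
    }

    func main() {
        if err := probeKubeletHealthz(4 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
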
	I0318 14:26:47.296738 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:47.296974 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:07.297620 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:07.297848 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:11.668940 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.375539998s)
	I0318 14:27:11.669036 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:11.687767 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:27:11.699135 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:11.710896 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:11.710924 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:11.710971 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:11.721562 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:11.721638 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:11.733335 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:11.744643 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:11.744724 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:11.755801 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.766424 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:11.766515 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.777734 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:11.788887 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:11.788972 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:27:11.800792 1128583 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:11.858933 1128583 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 14:27:11.859030 1128583 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:27:12.029485 1128583 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:27:12.029703 1128583 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:27:12.029833 1128583 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:27:12.279174 1128583 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:27:12.281285 1128583 out.go:204]   - Generating certificates and keys ...
	I0318 14:27:12.281400 1128583 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:27:12.281507 1128583 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:27:12.281633 1128583 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:27:12.281726 1128583 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:27:12.281844 1128583 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:27:12.281938 1128583 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:27:12.282031 1128583 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:27:12.282121 1128583 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:27:12.282218 1128583 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:27:12.282325 1128583 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:27:12.282392 1128583 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:27:12.282470 1128583 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:27:12.605106 1128583 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:27:12.950706 1128583 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 14:27:13.067948 1128583 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:27:13.340677 1128583 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:27:13.393147 1128583 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:27:13.393891 1128583 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:27:13.396474 1128583 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:27:13.398563 1128583 out.go:204]   - Booting up control plane ...
	I0318 14:27:13.398698 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:27:13.398814 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:27:13.398900 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:27:13.422155 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:27:13.423529 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:27:13.423626 1128583 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:27:13.568295 1128583 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:27:19.571958 1128583 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003509 seconds
	I0318 14:27:19.587644 1128583 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:27:19.607417 1128583 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:27:20.153253 1128583 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:27:20.153526 1128583 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-188109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:27:20.671613 1128583 kubeadm.go:309] [bootstrap-token] Using token: oq5d1l.24j9td8ex727h998
	I0318 14:27:20.673250 1128583 out.go:204]   - Configuring RBAC rules ...
	I0318 14:27:20.673402 1128583 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:27:20.680765 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:27:20.693884 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:27:20.698696 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:27:20.702572 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:27:20.710027 1128583 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:27:20.725068 1128583 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:27:20.981178 1128583 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:27:21.104335 1128583 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:27:21.107428 1128583 kubeadm.go:309] 
	I0318 14:27:21.107550 1128583 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:27:21.107596 1128583 kubeadm.go:309] 
	I0318 14:27:21.107725 1128583 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:27:21.107750 1128583 kubeadm.go:309] 
	I0318 14:27:21.107796 1128583 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:27:21.107894 1128583 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:27:21.107995 1128583 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:27:21.108030 1128583 kubeadm.go:309] 
	I0318 14:27:21.108127 1128583 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:27:21.108145 1128583 kubeadm.go:309] 
	I0318 14:27:21.108228 1128583 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:27:21.108242 1128583 kubeadm.go:309] 
	I0318 14:27:21.108318 1128583 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:27:21.108400 1128583 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:27:21.108487 1128583 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:27:21.108503 1128583 kubeadm.go:309] 
	I0318 14:27:21.108628 1128583 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:27:21.108730 1128583 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:27:21.108741 1128583 kubeadm.go:309] 
	I0318 14:27:21.108839 1128583 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.108968 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:27:21.109031 1128583 kubeadm.go:309] 	--control-plane 
	I0318 14:27:21.109054 1128583 kubeadm.go:309] 
	I0318 14:27:21.109176 1128583 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:27:21.109195 1128583 kubeadm.go:309] 
	I0318 14:27:21.109298 1128583 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.109455 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:27:21.114992 1128583 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:21.115128 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:27:21.115151 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:27:21.116940 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:27:21.118320 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:27:21.167945 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:27:21.256429 1128583 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-188109 minikube.k8s.io/updated_at=2024_03_18T14_27_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=no-preload-188109 minikube.k8s.io/primary=true
	I0318 14:27:21.315419 1128583 ops.go:34] apiserver oom_adj: -16
	I0318 14:27:21.530472 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.030814 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.531214 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.030869 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.530677 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.031137 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.531400 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.031455 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.530648 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.031501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.531399 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.031109 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.531261 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.030757 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.531295 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.030505 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.531501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.030996 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.530490 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.030520 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.531340 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.031217 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.531425 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.031231 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.531300 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.678904 1128583 kubeadm.go:1107] duration metric: took 12.422463336s to wait for elevateKubeSystemPrivileges
	W0318 14:27:33.678959 1128583 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:27:33.678972 1128583 kubeadm.go:393] duration metric: took 5m17.305262011s to StartCluster
	I0318 14:27:33.678999 1128583 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.679119 1128583 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:27:33.681595 1128583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.681893 1128583 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:27:33.683724 1128583 out.go:177] * Verifying Kubernetes components...
	I0318 14:27:33.682059 1128583 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:27:33.682122 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:27:33.685123 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:27:33.685131 1128583 addons.go:69] Setting default-storageclass=true in profile "no-preload-188109"
	I0318 14:27:33.685135 1128583 addons.go:69] Setting storage-provisioner=true in profile "no-preload-188109"
	I0318 14:27:33.685139 1128583 addons.go:69] Setting metrics-server=true in profile "no-preload-188109"
	I0318 14:27:33.685165 1128583 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-188109"
	I0318 14:27:33.685173 1128583 addons.go:234] Setting addon metrics-server=true in "no-preload-188109"
	I0318 14:27:33.685175 1128583 addons.go:234] Setting addon storage-provisioner=true in "no-preload-188109"
	W0318 14:27:33.685182 1128583 addons.go:243] addon metrics-server should already be in state true
	W0318 14:27:33.685185 1128583 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:27:33.685231 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685238 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685573 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685575 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685613 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685617 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685629 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685637 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.703022 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0318 14:27:33.703262 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0318 14:27:33.703844 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704181 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704628 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704649 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.704715 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704736 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.705213 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705374 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705809 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705863 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.705911 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705987 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.706076 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0318 14:27:33.706558 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.707198 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.707222 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.707699 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.708354 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.712289 1128583 addons.go:234] Setting addon default-storageclass=true in "no-preload-188109"
	W0318 14:27:33.712323 1128583 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:27:33.712364 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.712795 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.712833 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.724381 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0318 14:27:33.724980 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.725587 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.725614 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.726054 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.726363 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.727777 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0318 14:27:33.728182 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.728497 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.730538 1128583 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:27:33.729152 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.730851 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0318 14:27:33.732037 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:27:33.732055 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:27:33.732076 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.732113 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.732489 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.732593 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.732881 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.732979 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.732991 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.733604 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.734297 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.734329 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.735399 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.737266 1128583 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:27:33.735988 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.736830 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.739081 1128583 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:33.739098 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:27:33.737327 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.739122 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.739142 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.740009 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.740263 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.740482 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.742702 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743181 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.743211 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743473 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.743706 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.743902 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.744097 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.752903 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0318 14:27:33.756275 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.756901 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.756932 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.757363 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.757603 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.759471 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.759732 1128583 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:33.759751 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:27:33.759772 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.762687 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763139 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.763162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763414 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.763599 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.763765 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.763919 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.942490 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:27:33.975796 1128583 node_ready.go:35] waiting up to 6m0s for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008100 1128583 node_ready.go:49] node "no-preload-188109" has status "Ready":"True"
	I0318 14:27:34.008135 1128583 node_ready.go:38] duration metric: took 32.281068ms for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008149 1128583 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:34.039370 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:34.067765 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:27:34.067798 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:27:34.088294 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:34.091931 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:34.121689 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:27:34.121722 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:27:34.183609 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:34.183638 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:27:34.264906 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:35.590900 1128583 pod_ready.go:92] pod "coredns-76f75df574-jk9v5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.590928 1128583 pod_ready.go:81] duration metric: took 1.551526097s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.590938 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605647 1128583 pod_ready.go:92] pod "coredns-76f75df574-xczpc" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.605675 1128583 pod_ready.go:81] duration metric: took 14.730232ms for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605685 1128583 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.613213 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.521243904s)
	I0318 14:27:35.613276 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613289 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613282 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.524948587s)
	I0318 14:27:35.613324 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.613811 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613813 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.613824 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613831 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614119 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614166 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614183 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.614191 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614192 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.614234 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614273 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614502 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614517 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.636576 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.636610 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.636920 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.636946 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.656945 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.656972 1128583 pod_ready.go:81] duration metric: took 51.280554ms for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.656983 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683260 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.683291 1128583 pod_ready.go:81] duration metric: took 26.301625ms for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683301 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.691855 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42688194s)
	I0318 14:27:35.691918 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.691934 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692300 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692325 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692336 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.692344 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692661 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.692701 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692709 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692721 1128583 addons.go:470] Verifying addon metrics-server=true in "no-preload-188109"
	I0318 14:27:35.694758 1128583 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:27:35.696004 1128583 addons.go:505] duration metric: took 2.013954954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:27:35.709010 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.709035 1128583 pod_ready.go:81] duration metric: took 25.726967ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.709045 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982032 1128583 pod_ready.go:92] pod "kube-proxy-qpxx5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.982080 1128583 pod_ready.go:81] duration metric: took 273.026354ms for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982094 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380184 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:36.380228 1128583 pod_ready.go:81] duration metric: took 398.123566ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380241 1128583 pod_ready.go:38] duration metric: took 2.372078145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:36.380264 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:27:36.380334 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:27:36.401316 1128583 api_server.go:72] duration metric: took 2.719374991s to wait for apiserver process to appear ...
	I0318 14:27:36.401358 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:27:36.401389 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:27:36.407212 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:27:36.408930 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:27:36.408966 1128583 api_server.go:131] duration metric: took 7.597771ms to wait for apiserver health ...
	I0318 14:27:36.408989 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:27:36.583053 1128583 system_pods.go:59] 9 kube-system pods found
	I0318 14:27:36.583099 1128583 system_pods.go:61] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.583107 1128583 system_pods.go:61] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.583112 1128583 system_pods.go:61] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.583116 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.583120 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.583123 1128583 system_pods.go:61] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.583127 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.583134 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.583138 1128583 system_pods.go:61] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.583147 1128583 system_pods.go:74] duration metric: took 174.139423ms to wait for pod list to return data ...
	I0318 14:27:36.583156 1128583 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:27:36.779733 1128583 default_sa.go:45] found service account: "default"
	I0318 14:27:36.779771 1128583 default_sa.go:55] duration metric: took 196.607194ms for default service account to be created ...
	I0318 14:27:36.779783 1128583 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:27:36.982750 1128583 system_pods.go:86] 9 kube-system pods found
	I0318 14:27:36.982783 1128583 system_pods.go:89] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.982789 1128583 system_pods.go:89] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.982793 1128583 system_pods.go:89] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.982798 1128583 system_pods.go:89] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.982804 1128583 system_pods.go:89] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.982808 1128583 system_pods.go:89] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.982812 1128583 system_pods.go:89] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.982819 1128583 system_pods.go:89] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.982823 1128583 system_pods.go:89] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.982832 1128583 system_pods.go:126] duration metric: took 203.042771ms to wait for k8s-apps to be running ...
	I0318 14:27:36.982839 1128583 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:27:36.982902 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:37.000948 1128583 system_svc.go:56] duration metric: took 18.09435ms WaitForService to wait for kubelet
	I0318 14:27:37.000980 1128583 kubeadm.go:576] duration metric: took 3.319049387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:27:37.001005 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:27:37.180608 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:27:37.180639 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:27:37.180652 1128583 node_conditions.go:105] duration metric: took 179.641912ms to run NodePressure ...
	I0318 14:27:37.180665 1128583 start.go:240] waiting for startup goroutines ...
	I0318 14:27:37.180672 1128583 start.go:245] waiting for cluster config update ...
	I0318 14:27:37.180686 1128583 start.go:254] writing updated cluster config ...
	I0318 14:27:37.181004 1128583 ssh_runner.go:195] Run: rm -f paused
	I0318 14:27:37.236286 1128583 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 14:27:37.238455 1128583 out.go:177] * Done! kubectl is now configured to use "no-preload-188109" cluster and "default" namespace by default
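
Note on the api_server.go entries above ("Checking apiserver healthz at https://192.168.61.40:8443/healthz ... returned 200: ok"): the wait succeeds once a single GET against the apiserver's /healthz endpoint returns 200. A minimal sketch of such a check (the URL and expected "ok" body are from the log; the TLS handling and function name are assumptions, not minikube's implementation):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkAPIServerHealthz performs one GET against the apiserver's /healthz
    // endpoint, skipping certificate verification because the test cluster
    // uses its own CA (an assumption for this sketch).
    func checkAPIServerHealthz(url string) (string, error) {
        client := &http.Client{
            Timeout: 10 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return string(body), fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return string(body), nil // expect the literal body "ok", as in the log
    }

    func main() {
        body, err := checkAPIServerHealthz("https://192.168.61.40:8443/healthz")
        fmt.Println(body, err)
    }
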
	I0318 14:27:47.299396 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:47.299722 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:47.299759 1129259 kubeadm.go:309] 
	I0318 14:27:47.299848 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:27:47.300040 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:27:47.300062 1129259 kubeadm.go:309] 
	I0318 14:27:47.300106 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:27:47.300187 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:27:47.300340 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:27:47.300356 1129259 kubeadm.go:309] 
	I0318 14:27:47.300534 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:27:47.300590 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:27:47.300636 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:27:47.300646 1129259 kubeadm.go:309] 
	I0318 14:27:47.300803 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:27:47.300929 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:27:47.300942 1129259 kubeadm.go:309] 
	I0318 14:27:47.301093 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:27:47.301232 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:27:47.301346 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:27:47.301475 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:27:47.301496 1129259 kubeadm.go:309] 
	I0318 14:27:47.303477 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:47.303616 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:27:47.303718 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 14:27:47.303903 1129259 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 14:27:47.303969 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:27:47.790664 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:47.807959 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:47.820332 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:47.820357 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:47.820422 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:47.832124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:47.832219 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:47.845017 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:47.856877 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:47.856954 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:47.868530 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.879309 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:47.879394 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.891766 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:47.903303 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:47.903392 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:27:47.914820 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:48.170124 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:29:44.224147 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:29:44.224414 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 14:29:44.225789 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:29:44.225885 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:29:44.226010 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:29:44.226135 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:29:44.226292 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:29:44.226384 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:29:44.228246 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:29:44.228346 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:29:44.228440 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:29:44.228567 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:29:44.228684 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:29:44.228803 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:29:44.228874 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:29:44.229018 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:29:44.229096 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:29:44.229166 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:29:44.229231 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:29:44.229269 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:29:44.229316 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:29:44.229365 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:29:44.229415 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:29:44.229468 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:29:44.229540 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:29:44.229663 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:29:44.229755 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:29:44.229804 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:29:44.229893 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:29:44.231359 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:29:44.231484 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:29:44.231592 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:29:44.231674 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:29:44.231777 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:29:44.231993 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:29:44.232046 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:29:44.232103 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232333 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232411 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232621 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232691 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232896 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232955 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233113 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233178 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233370 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233382 1129259 kubeadm.go:309] 
	I0318 14:29:44.233430 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:29:44.233480 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:29:44.233492 1129259 kubeadm.go:309] 
	I0318 14:29:44.233523 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:29:44.233554 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:29:44.233642 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:29:44.233655 1129259 kubeadm.go:309] 
	I0318 14:29:44.233797 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:29:44.233830 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:29:44.233860 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:29:44.233867 1129259 kubeadm.go:309] 
	I0318 14:29:44.233994 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:29:44.234116 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:29:44.234124 1129259 kubeadm.go:309] 
	I0318 14:29:44.234246 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:29:44.234389 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:29:44.234516 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:29:44.234606 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:29:44.234676 1129259 kubeadm.go:309] 
	I0318 14:29:44.234699 1129259 kubeadm.go:393] duration metric: took 7m59.028536241s to StartCluster
	I0318 14:29:44.234794 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:29:44.234989 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:29:44.301714 1129259 cri.go:89] found id: ""
	I0318 14:29:44.301764 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.301792 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:29:44.301801 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:29:44.301865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:29:44.345158 1129259 cri.go:89] found id: ""
	I0318 14:29:44.345197 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.345209 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:29:44.345217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:29:44.345281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:29:44.381184 1129259 cri.go:89] found id: ""
	I0318 14:29:44.381217 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.381227 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:29:44.381232 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:29:44.381296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:29:44.419906 1129259 cri.go:89] found id: ""
	I0318 14:29:44.419972 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.419987 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:29:44.419996 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:29:44.420085 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:29:44.459683 1129259 cri.go:89] found id: ""
	I0318 14:29:44.459732 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.459747 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:29:44.459755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:29:44.459848 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:29:44.502434 1129259 cri.go:89] found id: ""
	I0318 14:29:44.502477 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.502490 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:29:44.502499 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:29:44.502563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:29:44.543384 1129259 cri.go:89] found id: ""
	I0318 14:29:44.543417 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.543429 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:29:44.543438 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:29:44.543509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:29:44.584405 1129259 cri.go:89] found id: ""
	I0318 14:29:44.584450 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.584463 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:29:44.584478 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:29:44.584496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:29:44.638997 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:29:44.639036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:29:44.656641 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:29:44.656679 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:29:44.757942 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:29:44.757976 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:29:44.757994 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:29:44.878791 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:29:44.878838 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 14:29:44.926371 1129259 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 14:29:44.926432 1129259 out.go:239] * 
	W0318 14:29:44.926513 1129259 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.926548 1129259 out.go:239] * 
	W0318 14:29:44.927402 1129259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:29:44.931815 1129259 out.go:177] 
	W0318 14:29:44.933471 1129259 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.933562 1129259 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 14:29:44.933609 1129259 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 14:29:44.935544 1129259 out.go:177] 
	
	
	==> CRI-O <==
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.365966983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772730365935116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37226c0c-5518-4409-a4bc-fd3de037a5b7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.366704134Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59137bb2-a48f-4765-b5f8-394d623a37a1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.366763538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59137bb2-a48f-4765-b5f8-394d623a37a1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.366795610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=59137bb2-a48f-4765-b5f8-394d623a37a1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.402399013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1785c53a-7149-445e-bc13-067853388ae4 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.402508350Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1785c53a-7149-445e-bc13-067853388ae4 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.403962065Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1696257-9bc3-4b12-bbe1-c6c546f8b3e8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.404477256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772730404433861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1696257-9bc3-4b12-bbe1-c6c546f8b3e8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.405203340Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=765dc64c-043b-4f68-9672-fb8c065dc2e8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.405315729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=765dc64c-043b-4f68-9672-fb8c065dc2e8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.405353184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=765dc64c-043b-4f68-9672-fb8c065dc2e8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.443377439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fcffbe36-719f-4ad9-a917-fd18f604d2fd name=/runtime.v1.RuntimeService/Version
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.443457689Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fcffbe36-719f-4ad9-a917-fd18f604d2fd name=/runtime.v1.RuntimeService/Version
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.444661624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=224e464c-6032-499d-adc7-49ca4f0c6676 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.445065520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772730445038012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=224e464c-6032-499d-adc7-49ca4f0c6676 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.445740019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e761095-a82f-4479-907d-42beddfeec67 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.445795083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e761095-a82f-4479-907d-42beddfeec67 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.445832558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9e761095-a82f-4479-907d-42beddfeec67 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.479439127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ccd26765-7f5d-4ec9-9118-d142a9828c4e name=/runtime.v1.RuntimeService/Version
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.479535575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ccd26765-7f5d-4ec9-9118-d142a9828c4e name=/runtime.v1.RuntimeService/Version
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.480919009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8699d003-d055-479c-ab3a-2e08ac9b9429 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.481395373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772730481373067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8699d003-d055-479c-ab3a-2e08ac9b9429 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.482320089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d6bd884-9ac8-47e6-ae02-c955fba15820 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.482407782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d6bd884-9ac8-47e6-ae02-c955fba15820 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:38:50 old-k8s-version-782728 crio[653]: time="2024-03-18 14:38:50.482448767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0d6bd884-9ac8-47e6-ae02-c955fba15820 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar18 14:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052875] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041790] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.841305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.922199] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.676692] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.118921] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.062985] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068114] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.236009] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.138019] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.296452] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.964415] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.070819] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.228114] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[  +9.153953] kauditd_printk_skb: 46 callbacks suppressed
	[Mar18 14:25] systemd-fstab-generator[4965]: Ignoring "noauto" option for root device
	[Mar18 14:27] systemd-fstab-generator[5242]: Ignoring "noauto" option for root device
	[  +0.076165] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:38:50 up 17 min,  0 users,  load average: 0.00, 0.06, 0.07
	Linux old-k8s-version-782728 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000d8700, 0xc00054e660, 0xc00054e660, 0x0, 0x0)
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008a68c0)
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]: goroutine 164 [runnable]:
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]: runtime.Gosched(...)
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]:         /usr/local/go/src/runtime/proc.go:271
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0002e5140, 0x0, 0x0)
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:549 +0x1a5
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008a68c0)
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Mar 18 14:38:48 old-k8s-version-782728 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 18 14:38:48 old-k8s-version-782728 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 18 14:38:48 old-k8s-version-782728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 18 14:38:48 old-k8s-version-782728 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 18 14:38:48 old-k8s-version-782728 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6441]: I0318 14:38:48.990600    6441 server.go:416] Version: v1.20.0
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6441]: I0318 14:38:48.991063    6441 server.go:837] Client rotation is on, will bootstrap in background
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6441]: I0318 14:38:48.994618    6441 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6441]: W0318 14:38:48.996247    6441 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 18 14:38:48 old-k8s-version-782728 kubelet[6441]: I0318 14:38:48.996562    6441 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-782728 -n old-k8s-version-782728
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 2 (263.001277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-782728" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.47s)
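The failure above ends with minikube's own hints: the kubelet is crash-looping ("Scheduled restart job, restart counter is at 114", "Cannot detect current cgroup on cgroup v2"), and the suggested remedy is to pass --extra-config=kubelet.cgroup-driver=systemd (related issue kubernetes/minikube#4172). A minimal sketch of that follow-up, using the profile name and runtime seen in this log — the profile's full original start flags are not shown here, so everything below except the quoted --extra-config should be read as an assumption, not as part of the test run:

	# Inspect why the kubelet keeps exiting (the checks kubeadm suggests above)
	minikube ssh -p old-k8s-version-782728 -- sudo journalctl -xeu kubelet
	minikube ssh -p old-k8s-version-782728 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# Retry the start with the kubelet forced onto the systemd cgroup driver
	minikube start -p old-k8s-version-782728 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd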

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (393.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767719 -n embed-certs-767719
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:41:56.51040994 +0000 UTC m=+7034.303869134
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-767719 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-767719 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.276µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-767719 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
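The assertion at start_stop_delete_test.go:297 expects the dashboard addon's deployment to reference the registry.k8s.io/echoserver:1.4 image, but the describe call above already hit the context deadline, so no deployment info was collected. A manual spot-check of the image actually deployed — assuming the same context and the dashboard-metrics-scraper deployment named in the log; the jsonpath query is illustrative, not part of the test — would be:

	kubectl --context embed-certs-767719 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'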
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767719 -n embed-certs-767719
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-767719 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-767719 logs -n 25: (1.550249161s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p flannel-059272                                      | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-784874 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | disable-driver-mounts-784874                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:14 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-188109             | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767719            | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-075922  | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC | 18 Mar 24 14:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC |                     |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-782728        | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-188109                  | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC | 18 Mar 24 14:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767719                 | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-075922       | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-782728             | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:40 UTC | 18 Mar 24 14:40 UTC |
	| start   | -p newest-cni-997491 --memory=2200 --alsologtostderr   | newest-cni-997491            | jenkins | v1.32.0 | 18 Mar 24 14:40 UTC | 18 Mar 24 14:41 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:41 UTC | 18 Mar 24 14:41 UTC |
	| addons  | enable metrics-server -p newest-cni-997491             | newest-cni-997491            | jenkins | v1.32.0 | 18 Mar 24 14:41 UTC | 18 Mar 24 14:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-997491                                   | newest-cni-997491            | jenkins | v1.32.0 | 18 Mar 24 14:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:40:47
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:40:47.860233 1134500 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:40:47.860530 1134500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:40:47.860541 1134500 out.go:304] Setting ErrFile to fd 2...
	I0318 14:40:47.860548 1134500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:40:47.860766 1134500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:40:47.861449 1134500 out.go:298] Setting JSON to false
	I0318 14:40:47.862819 1134500 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":22995,"bootTime":1710749853,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:40:47.862890 1134500 start.go:139] virtualization: kvm guest
	I0318 14:40:47.865457 1134500 out.go:177] * [newest-cni-997491] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:40:47.866908 1134500 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:40:47.866962 1134500 notify.go:220] Checking for updates...
	I0318 14:40:47.868625 1134500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:40:47.870263 1134500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:40:47.871798 1134500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:40:47.873186 1134500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:40:47.874811 1134500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:40:47.876948 1134500 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:40:47.877090 1134500 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:40:47.877270 1134500 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:40:47.877465 1134500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:40:47.917506 1134500 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 14:40:47.919187 1134500 start.go:297] selected driver: kvm2
	I0318 14:40:47.919220 1134500 start.go:901] validating driver "kvm2" against <nil>
	I0318 14:40:47.919235 1134500 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:40:47.920155 1134500 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:40:47.920261 1134500 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:40:47.936759 1134500 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:40:47.936817 1134500 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0318 14:40:47.936867 1134500 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0318 14:40:47.937113 1134500 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 14:40:47.937180 1134500 cni.go:84] Creating CNI manager for ""
	I0318 14:40:47.937194 1134500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:40:47.937201 1134500 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 14:40:47.937251 1134500 start.go:340] cluster config:
	{Name:newest-cni-997491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-997491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:40:47.937363 1134500 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:40:47.939692 1134500 out.go:177] * Starting "newest-cni-997491" primary control-plane node in "newest-cni-997491" cluster
	I0318 14:40:47.941013 1134500 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:40:47.941058 1134500 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 14:40:47.941066 1134500 cache.go:56] Caching tarball of preloaded images
	I0318 14:40:47.941187 1134500 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:40:47.941202 1134500 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0318 14:40:47.941306 1134500 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/config.json ...
	I0318 14:40:47.941333 1134500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/config.json: {Name:mk53a5078ce8cd00824bc119cfc6d3c1fd475011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:40:47.941574 1134500 start.go:360] acquireMachinesLock for newest-cni-997491: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:40:47.941620 1134500 start.go:364] duration metric: took 21.846µs to acquireMachinesLock for "newest-cni-997491"
	I0318 14:40:47.941648 1134500 start.go:93] Provisioning new machine with config: &{Name:newest-cni-997491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-997491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:40:47.941744 1134500 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 14:40:47.943644 1134500 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 14:40:47.943860 1134500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:40:47.943909 1134500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:40:47.961532 1134500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43715
	I0318 14:40:47.962023 1134500 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:40:47.962679 1134500 main.go:141] libmachine: Using API Version  1
	I0318 14:40:47.962704 1134500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:40:47.963193 1134500 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:40:47.963417 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetMachineName
	I0318 14:40:47.963619 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:40:47.963800 1134500 start.go:159] libmachine.API.Create for "newest-cni-997491" (driver="kvm2")
	I0318 14:40:47.963868 1134500 client.go:168] LocalClient.Create starting
	I0318 14:40:47.963955 1134500 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 14:40:47.964017 1134500 main.go:141] libmachine: Decoding PEM data...
	I0318 14:40:47.964048 1134500 main.go:141] libmachine: Parsing certificate...
	I0318 14:40:47.964141 1134500 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 14:40:47.964170 1134500 main.go:141] libmachine: Decoding PEM data...
	I0318 14:40:47.964187 1134500 main.go:141] libmachine: Parsing certificate...
	I0318 14:40:47.964215 1134500 main.go:141] libmachine: Running pre-create checks...
	I0318 14:40:47.964235 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .PreCreateCheck
	I0318 14:40:47.964689 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetConfigRaw
	I0318 14:40:47.965194 1134500 main.go:141] libmachine: Creating machine...
	I0318 14:40:47.965211 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .Create
	I0318 14:40:47.965374 1134500 main.go:141] libmachine: (newest-cni-997491) Creating KVM machine...
	I0318 14:40:47.966812 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found existing default KVM network
	I0318 14:40:47.968337 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:47.968143 1134524 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:56:14:d4} reservation:<nil>}
	I0318 14:40:47.969865 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:47.969769 1134524 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002de7f0}
	I0318 14:40:47.969892 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | created network xml: 
	I0318 14:40:47.969902 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | <network>
	I0318 14:40:47.969911 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   <name>mk-newest-cni-997491</name>
	I0318 14:40:47.969920 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   <dns enable='no'/>
	I0318 14:40:47.969926 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   
	I0318 14:40:47.969937 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0318 14:40:47.969948 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |     <dhcp>
	I0318 14:40:47.969960 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0318 14:40:47.969976 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |     </dhcp>
	I0318 14:40:47.969989 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   </ip>
	I0318 14:40:47.969998 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   
	I0318 14:40:47.970006 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | </network>
	I0318 14:40:47.970026 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | 
	I0318 14:40:47.975703 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | trying to create private KVM network mk-newest-cni-997491 192.168.50.0/24...
	I0318 14:40:48.053455 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | private KVM network mk-newest-cni-997491 192.168.50.0/24 created
	I0318 14:40:48.053515 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:48.053400 1134524 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:40:48.053529 1134500 main.go:141] libmachine: (newest-cni-997491) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491 ...
	I0318 14:40:48.053553 1134500 main.go:141] libmachine: (newest-cni-997491) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 14:40:48.053567 1134500 main.go:141] libmachine: (newest-cni-997491) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 14:40:48.325318 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:48.325178 1134524 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/id_rsa...
	I0318 14:40:48.519429 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:48.519283 1134524 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/newest-cni-997491.rawdisk...
	I0318 14:40:48.519466 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Writing magic tar header
	I0318 14:40:48.519507 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Writing SSH key tar header
	I0318 14:40:48.519520 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:48.519456 1134524 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491 ...
	I0318 14:40:48.519650 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491
	I0318 14:40:48.519700 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491 (perms=drwx------)
	I0318 14:40:48.519717 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 14:40:48.519733 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:40:48.519743 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 14:40:48.519751 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 14:40:48.519759 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins
	I0318 14:40:48.519766 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home
	I0318 14:40:48.519776 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Skipping /home - not owner
	I0318 14:40:48.519811 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 14:40:48.519883 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 14:40:48.519900 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 14:40:48.519914 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 14:40:48.519932 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 14:40:48.519943 1134500 main.go:141] libmachine: (newest-cni-997491) Creating domain...
	I0318 14:40:48.521247 1134500 main.go:141] libmachine: (newest-cni-997491) define libvirt domain using xml: 
	I0318 14:40:48.521270 1134500 main.go:141] libmachine: (newest-cni-997491) <domain type='kvm'>
	I0318 14:40:48.521277 1134500 main.go:141] libmachine: (newest-cni-997491)   <name>newest-cni-997491</name>
	I0318 14:40:48.521286 1134500 main.go:141] libmachine: (newest-cni-997491)   <memory unit='MiB'>2200</memory>
	I0318 14:40:48.521295 1134500 main.go:141] libmachine: (newest-cni-997491)   <vcpu>2</vcpu>
	I0318 14:40:48.521302 1134500 main.go:141] libmachine: (newest-cni-997491)   <features>
	I0318 14:40:48.521313 1134500 main.go:141] libmachine: (newest-cni-997491)     <acpi/>
	I0318 14:40:48.521324 1134500 main.go:141] libmachine: (newest-cni-997491)     <apic/>
	I0318 14:40:48.521334 1134500 main.go:141] libmachine: (newest-cni-997491)     <pae/>
	I0318 14:40:48.521344 1134500 main.go:141] libmachine: (newest-cni-997491)     
	I0318 14:40:48.521353 1134500 main.go:141] libmachine: (newest-cni-997491)   </features>
	I0318 14:40:48.521366 1134500 main.go:141] libmachine: (newest-cni-997491)   <cpu mode='host-passthrough'>
	I0318 14:40:48.521401 1134500 main.go:141] libmachine: (newest-cni-997491)   
	I0318 14:40:48.521434 1134500 main.go:141] libmachine: (newest-cni-997491)   </cpu>
	I0318 14:40:48.521444 1134500 main.go:141] libmachine: (newest-cni-997491)   <os>
	I0318 14:40:48.521459 1134500 main.go:141] libmachine: (newest-cni-997491)     <type>hvm</type>
	I0318 14:40:48.521470 1134500 main.go:141] libmachine: (newest-cni-997491)     <boot dev='cdrom'/>
	I0318 14:40:48.521475 1134500 main.go:141] libmachine: (newest-cni-997491)     <boot dev='hd'/>
	I0318 14:40:48.521483 1134500 main.go:141] libmachine: (newest-cni-997491)     <bootmenu enable='no'/>
	I0318 14:40:48.521493 1134500 main.go:141] libmachine: (newest-cni-997491)   </os>
	I0318 14:40:48.521501 1134500 main.go:141] libmachine: (newest-cni-997491)   <devices>
	I0318 14:40:48.521511 1134500 main.go:141] libmachine: (newest-cni-997491)     <disk type='file' device='cdrom'>
	I0318 14:40:48.521527 1134500 main.go:141] libmachine: (newest-cni-997491)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/boot2docker.iso'/>
	I0318 14:40:48.521543 1134500 main.go:141] libmachine: (newest-cni-997491)       <target dev='hdc' bus='scsi'/>
	I0318 14:40:48.521554 1134500 main.go:141] libmachine: (newest-cni-997491)       <readonly/>
	I0318 14:40:48.521583 1134500 main.go:141] libmachine: (newest-cni-997491)     </disk>
	I0318 14:40:48.521597 1134500 main.go:141] libmachine: (newest-cni-997491)     <disk type='file' device='disk'>
	I0318 14:40:48.521609 1134500 main.go:141] libmachine: (newest-cni-997491)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 14:40:48.521643 1134500 main.go:141] libmachine: (newest-cni-997491)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/newest-cni-997491.rawdisk'/>
	I0318 14:40:48.521667 1134500 main.go:141] libmachine: (newest-cni-997491)       <target dev='hda' bus='virtio'/>
	I0318 14:40:48.521687 1134500 main.go:141] libmachine: (newest-cni-997491)     </disk>
	I0318 14:40:48.521706 1134500 main.go:141] libmachine: (newest-cni-997491)     <interface type='network'>
	I0318 14:40:48.521719 1134500 main.go:141] libmachine: (newest-cni-997491)       <source network='mk-newest-cni-997491'/>
	I0318 14:40:48.521730 1134500 main.go:141] libmachine: (newest-cni-997491)       <model type='virtio'/>
	I0318 14:40:48.521740 1134500 main.go:141] libmachine: (newest-cni-997491)     </interface>
	I0318 14:40:48.521753 1134500 main.go:141] libmachine: (newest-cni-997491)     <interface type='network'>
	I0318 14:40:48.521769 1134500 main.go:141] libmachine: (newest-cni-997491)       <source network='default'/>
	I0318 14:40:48.521781 1134500 main.go:141] libmachine: (newest-cni-997491)       <model type='virtio'/>
	I0318 14:40:48.521796 1134500 main.go:141] libmachine: (newest-cni-997491)     </interface>
	I0318 14:40:48.521806 1134500 main.go:141] libmachine: (newest-cni-997491)     <serial type='pty'>
	I0318 14:40:48.521817 1134500 main.go:141] libmachine: (newest-cni-997491)       <target port='0'/>
	I0318 14:40:48.521827 1134500 main.go:141] libmachine: (newest-cni-997491)     </serial>
	I0318 14:40:48.521835 1134500 main.go:141] libmachine: (newest-cni-997491)     <console type='pty'>
	I0318 14:40:48.521844 1134500 main.go:141] libmachine: (newest-cni-997491)       <target type='serial' port='0'/>
	I0318 14:40:48.521863 1134500 main.go:141] libmachine: (newest-cni-997491)     </console>
	I0318 14:40:48.521874 1134500 main.go:141] libmachine: (newest-cni-997491)     <rng model='virtio'>
	I0318 14:40:48.521888 1134500 main.go:141] libmachine: (newest-cni-997491)       <backend model='random'>/dev/random</backend>
	I0318 14:40:48.521902 1134500 main.go:141] libmachine: (newest-cni-997491)     </rng>
	I0318 14:40:48.521913 1134500 main.go:141] libmachine: (newest-cni-997491)     
	I0318 14:40:48.521920 1134500 main.go:141] libmachine: (newest-cni-997491)     
	I0318 14:40:48.521933 1134500 main.go:141] libmachine: (newest-cni-997491)   </devices>
	I0318 14:40:48.521942 1134500 main.go:141] libmachine: (newest-cni-997491) </domain>
	I0318 14:40:48.521951 1134500 main.go:141] libmachine: (newest-cni-997491) 
	I0318 14:40:48.526550 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:eb:3f:47 in network default
	I0318 14:40:48.527182 1134500 main.go:141] libmachine: (newest-cni-997491) Ensuring networks are active...
	I0318 14:40:48.527206 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:48.528002 1134500 main.go:141] libmachine: (newest-cni-997491) Ensuring network default is active
	I0318 14:40:48.528396 1134500 main.go:141] libmachine: (newest-cni-997491) Ensuring network mk-newest-cni-997491 is active
	I0318 14:40:48.529049 1134500 main.go:141] libmachine: (newest-cni-997491) Getting domain xml...
	I0318 14:40:48.529912 1134500 main.go:141] libmachine: (newest-cni-997491) Creating domain...
	I0318 14:40:49.800134 1134500 main.go:141] libmachine: (newest-cni-997491) Waiting to get IP...
	I0318 14:40:49.800899 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:49.801330 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:49.801388 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:49.801321 1134524 retry.go:31] will retry after 215.972164ms: waiting for machine to come up
	I0318 14:40:50.019019 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:50.019615 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:50.019650 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:50.019556 1134524 retry.go:31] will retry after 302.703358ms: waiting for machine to come up
	I0318 14:40:50.324237 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:50.324788 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:50.324820 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:50.324717 1134524 retry.go:31] will retry after 424.444672ms: waiting for machine to come up
	I0318 14:40:50.750250 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:50.750833 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:50.750868 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:50.750787 1134524 retry.go:31] will retry after 550.56941ms: waiting for machine to come up
	I0318 14:40:51.302390 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:51.302856 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:51.302880 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:51.302799 1134524 retry.go:31] will retry after 472.696783ms: waiting for machine to come up
	I0318 14:40:51.777568 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:51.777993 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:51.778024 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:51.777945 1134524 retry.go:31] will retry after 949.389477ms: waiting for machine to come up
	I0318 14:40:52.728902 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:52.729381 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:52.729414 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:52.729314 1134524 retry.go:31] will retry after 1.029751384s: waiting for machine to come up
	I0318 14:40:53.760875 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:53.761368 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:53.761395 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:53.761322 1134524 retry.go:31] will retry after 1.197480841s: waiting for machine to come up
	I0318 14:40:54.960787 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:54.961279 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:54.961311 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:54.961231 1134524 retry.go:31] will retry after 1.575956051s: waiting for machine to come up
	I0318 14:40:56.538939 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:56.539394 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:56.539424 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:56.539333 1134524 retry.go:31] will retry after 1.553381087s: waiting for machine to come up
	I0318 14:40:58.095145 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:58.095630 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:58.095664 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:58.095569 1134524 retry.go:31] will retry after 1.779999121s: waiting for machine to come up
	I0318 14:40:59.877035 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:59.877575 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:59.877609 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:59.877519 1134524 retry.go:31] will retry after 2.375135175s: waiting for machine to come up
	I0318 14:41:02.254060 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:02.254541 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:41:02.254578 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:41:02.254475 1134524 retry.go:31] will retry after 3.82072828s: waiting for machine to come up
	I0318 14:41:06.078483 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:06.078996 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:41:06.079034 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:41:06.078925 1134524 retry.go:31] will retry after 5.53631033s: waiting for machine to come up
	I0318 14:41:11.616672 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:11.617186 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has current primary IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:11.617214 1134500 main.go:141] libmachine: (newest-cni-997491) Found IP for machine: 192.168.50.192
	I0318 14:41:11.617228 1134500 main.go:141] libmachine: (newest-cni-997491) Reserving static IP address...
	I0318 14:41:11.617749 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find host DHCP lease matching {name: "newest-cni-997491", mac: "52:54:00:f9:f7:0a", ip: "192.168.50.192"} in network mk-newest-cni-997491
	I0318 14:41:11.705652 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Getting to WaitForSSH function...
	I0318 14:41:11.705683 1134500 main.go:141] libmachine: (newest-cni-997491) Reserved static IP address: 192.168.50.192
	I0318 14:41:11.705696 1134500 main.go:141] libmachine: (newest-cni-997491) Waiting for SSH to be available...
	I0318 14:41:11.708876 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:11.709338 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:11.709372 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:11.709527 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Using SSH client type: external
	I0318 14:41:11.709561 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/id_rsa (-rw-------)
	I0318 14:41:11.709593 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:41:11.709609 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | About to run SSH command:
	I0318 14:41:11.709623 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | exit 0
	I0318 14:41:11.840236 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | SSH cmd err, output: <nil>: 
	I0318 14:41:11.840525 1134500 main.go:141] libmachine: (newest-cni-997491) KVM machine creation complete!
	I0318 14:41:11.840865 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetConfigRaw
	I0318 14:41:11.841390 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:11.841627 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:11.841848 1134500 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 14:41:11.841866 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetState
	I0318 14:41:11.843209 1134500 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 14:41:11.843224 1134500 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 14:41:11.843245 1134500 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 14:41:11.843252 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:11.845747 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:11.846058 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:11.846096 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:11.846268 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:11.846440 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:11.846631 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:11.846815 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:11.846997 1134500 main.go:141] libmachine: Using SSH client type: native
	I0318 14:41:11.847262 1134500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0318 14:41:11.847277 1134500 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 14:41:11.955940 1134500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:41:11.955980 1134500 main.go:141] libmachine: Detecting the provisioner...
	I0318 14:41:11.955992 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:11.959200 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:11.959539 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:11.959581 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:11.959749 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:11.960035 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:11.960224 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:11.960435 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:11.960660 1134500 main.go:141] libmachine: Using SSH client type: native
	I0318 14:41:11.960839 1134500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0318 14:41:11.960850 1134500 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 14:41:12.077024 1134500 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 14:41:12.077201 1134500 main.go:141] libmachine: found compatible host: buildroot
	I0318 14:41:12.077217 1134500 main.go:141] libmachine: Provisioning with buildroot...
	I0318 14:41:12.077229 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetMachineName
	I0318 14:41:12.077593 1134500 buildroot.go:166] provisioning hostname "newest-cni-997491"
	I0318 14:41:12.077629 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetMachineName
	I0318 14:41:12.077863 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:12.080849 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.081228 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:12.081267 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.081453 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:12.081680 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:12.081838 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:12.081996 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:12.082193 1134500 main.go:141] libmachine: Using SSH client type: native
	I0318 14:41:12.082432 1134500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0318 14:41:12.082454 1134500 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-997491 && echo "newest-cni-997491" | sudo tee /etc/hostname
	I0318 14:41:12.208173 1134500 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-997491
	
	I0318 14:41:12.208206 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:12.211187 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.211586 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:12.211631 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.211870 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:12.212147 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:12.212360 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:12.212506 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:12.212708 1134500 main.go:141] libmachine: Using SSH client type: native
	I0318 14:41:12.212885 1134500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0318 14:41:12.212918 1134500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-997491' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-997491/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-997491' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:41:12.333381 1134500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:41:12.333435 1134500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:41:12.333472 1134500 buildroot.go:174] setting up certificates
	I0318 14:41:12.333488 1134500 provision.go:84] configureAuth start
	I0318 14:41:12.333507 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetMachineName
	I0318 14:41:12.333875 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetIP
	I0318 14:41:12.336717 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.337018 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:12.337041 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.337201 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:12.339509 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.339898 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:12.339926 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.340103 1134500 provision.go:143] copyHostCerts
	I0318 14:41:12.340193 1134500 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:41:12.340206 1134500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:41:12.340271 1134500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:41:12.340360 1134500 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:41:12.340368 1134500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:41:12.340393 1134500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:41:12.340442 1134500 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:41:12.340449 1134500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:41:12.340469 1134500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:41:12.340512 1134500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.newest-cni-997491 san=[127.0.0.1 192.168.50.192 localhost minikube newest-cni-997491]
	I0318 14:41:12.740056 1134500 provision.go:177] copyRemoteCerts
	I0318 14:41:12.740133 1134500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:41:12.740162 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:12.742971 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.743347 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:12.743376 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.743621 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:12.743870 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:12.744086 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:12.744229 1134500 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/id_rsa Username:docker}
	I0318 14:41:12.831371 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:41:12.859938 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:41:12.887504 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 14:41:12.914321 1134500 provision.go:87] duration metric: took 580.812659ms to configureAuth
	I0318 14:41:12.914358 1134500 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:41:12.914583 1134500 config.go:182] Loaded profile config "newest-cni-997491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:41:12.914671 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:12.917758 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.918153 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:12.918186 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:12.918352 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:12.918558 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:12.918707 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:12.918891 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:12.919092 1134500 main.go:141] libmachine: Using SSH client type: native
	I0318 14:41:12.919261 1134500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0318 14:41:12.919278 1134500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:41:13.232090 1134500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:41:13.232124 1134500 main.go:141] libmachine: Checking connection to Docker...
	I0318 14:41:13.232143 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetURL
	I0318 14:41:13.233744 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Using libvirt version 6000000
	I0318 14:41:13.236982 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.237455 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:13.237490 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.237733 1134500 main.go:141] libmachine: Docker is up and running!
	I0318 14:41:13.237755 1134500 main.go:141] libmachine: Reticulating splines...
	I0318 14:41:13.237765 1134500 client.go:171] duration metric: took 25.273884272s to LocalClient.Create
	I0318 14:41:13.237803 1134500 start.go:167] duration metric: took 25.274004418s to libmachine.API.Create "newest-cni-997491"
	I0318 14:41:13.237817 1134500 start.go:293] postStartSetup for "newest-cni-997491" (driver="kvm2")
	I0318 14:41:13.237838 1134500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:41:13.237867 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:13.238144 1134500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:41:13.238169 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:13.240883 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.241309 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:13.241336 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.241468 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:13.241692 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:13.241893 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:13.242136 1134500 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/id_rsa Username:docker}
	I0318 14:41:13.327552 1134500 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:41:13.332777 1134500 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:41:13.332820 1134500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:41:13.332941 1134500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:41:13.333046 1134500 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:41:13.333193 1134500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:41:13.345180 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:41:13.376760 1134500 start.go:296] duration metric: took 138.9227ms for postStartSetup
	I0318 14:41:13.376821 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetConfigRaw
	I0318 14:41:13.377478 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetIP
	I0318 14:41:13.380582 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.380977 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:13.381008 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.381361 1134500 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/config.json ...
	I0318 14:41:13.381626 1134500 start.go:128] duration metric: took 25.439869227s to createHost
	I0318 14:41:13.381654 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:13.384317 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.384712 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:13.384747 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.384896 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:13.385124 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:13.385300 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:13.385438 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:13.385606 1134500 main.go:141] libmachine: Using SSH client type: native
	I0318 14:41:13.385786 1134500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0318 14:41:13.385798 1134500 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:41:13.500940 1134500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710772873.473231831
	
	I0318 14:41:13.500973 1134500 fix.go:216] guest clock: 1710772873.473231831
	I0318 14:41:13.500985 1134500 fix.go:229] Guest: 2024-03-18 14:41:13.473231831 +0000 UTC Remote: 2024-03-18 14:41:13.381641787 +0000 UTC m=+25.574305119 (delta=91.590044ms)
	I0318 14:41:13.501088 1134500 fix.go:200] guest clock delta is within tolerance: 91.590044ms
	I0318 14:41:13.501095 1134500 start.go:83] releasing machines lock for "newest-cni-997491", held for 25.559467907s
	I0318 14:41:13.501125 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:13.501472 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetIP
	I0318 14:41:13.504633 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.504980 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:13.505017 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.505282 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:13.505954 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:13.506169 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:13.506250 1134500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:41:13.506303 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:13.506437 1134500 ssh_runner.go:195] Run: cat /version.json
	I0318 14:41:13.506469 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:13.509437 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.509472 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.509835 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:13.509863 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.509958 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:13.509997 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:13.510034 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:13.510210 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:13.510211 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:13.510397 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:13.510402 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:13.510591 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:13.510590 1134500 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/id_rsa Username:docker}
	I0318 14:41:13.510742 1134500 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/id_rsa Username:docker}
	I0318 14:41:13.597806 1134500 ssh_runner.go:195] Run: systemctl --version
	I0318 14:41:13.624729 1134500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:41:13.796699 1134500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:41:13.804392 1134500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:41:13.804472 1134500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:41:13.822382 1134500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:41:13.822422 1134500 start.go:494] detecting cgroup driver to use...
	I0318 14:41:13.822500 1134500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:41:13.840562 1134500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:41:13.856999 1134500 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:41:13.857067 1134500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:41:13.873327 1134500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:41:13.889951 1134500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:41:14.020050 1134500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:41:14.175538 1134500 docker.go:233] disabling docker service ...
	I0318 14:41:14.175619 1134500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:41:14.191889 1134500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:41:14.206037 1134500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:41:14.354391 1134500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:41:14.499767 1134500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:41:14.515281 1134500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:41:14.536519 1134500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:41:14.536579 1134500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:41:14.548888 1134500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:41:14.548983 1134500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:41:14.561479 1134500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:41:14.574313 1134500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:41:14.586728 1134500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:41:14.599114 1134500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:41:14.610307 1134500 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:41:14.610382 1134500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:41:14.625832 1134500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:41:14.638723 1134500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:41:14.761801 1134500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:41:14.919547 1134500 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:41:14.919645 1134500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:41:14.925455 1134500 start.go:562] Will wait 60s for crictl version
	I0318 14:41:14.925533 1134500 ssh_runner.go:195] Run: which crictl
	I0318 14:41:14.930054 1134500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:41:14.971870 1134500 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:41:14.971952 1134500 ssh_runner.go:195] Run: crio --version
	I0318 14:41:15.004155 1134500 ssh_runner.go:195] Run: crio --version
	I0318 14:41:15.040889 1134500 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 14:41:15.042488 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetIP
	I0318 14:41:15.045217 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:15.045642 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:15.045674 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:15.045878 1134500 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 14:41:15.050736 1134500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:41:15.067901 1134500 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0318 14:41:15.069438 1134500 kubeadm.go:877] updating cluster {Name:newest-cni-997491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-997491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:41:15.069600 1134500 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:41:15.069702 1134500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:41:15.107603 1134500 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 14:41:15.107683 1134500 ssh_runner.go:195] Run: which lz4
	I0318 14:41:15.112416 1134500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:41:15.117698 1134500 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:41:15.117739 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0318 14:41:16.800967 1134500 crio.go:444] duration metric: took 1.688598425s to copy over tarball
	I0318 14:41:16.801098 1134500 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:41:19.224356 1134500 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.423194317s)
	I0318 14:41:19.224388 1134500 crio.go:451] duration metric: took 2.423382213s to extract the tarball
	I0318 14:41:19.224396 1134500 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:41:19.265370 1134500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:41:19.315426 1134500 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:41:19.315455 1134500 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:41:19.315463 1134500 kubeadm.go:928] updating node { 192.168.50.192 8443 v1.29.0-rc.2 crio true true} ...
	I0318 14:41:19.315593 1134500 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-997491 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-997491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:41:19.315662 1134500 ssh_runner.go:195] Run: crio config
	I0318 14:41:19.365447 1134500 cni.go:84] Creating CNI manager for ""
	I0318 14:41:19.365472 1134500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:41:19.365485 1134500 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0318 14:41:19.365511 1134500 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.192 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-997491 NodeName:newest-cni-997491 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:41:19.365641 1134500 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-997491"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.192
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.192"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:41:19.365703 1134500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 14:41:19.377003 1134500 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:41:19.377083 1134500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:41:19.388058 1134500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0318 14:41:19.407865 1134500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 14:41:19.426574 1134500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0318 14:41:19.445508 1134500 ssh_runner.go:195] Run: grep 192.168.50.192	control-plane.minikube.internal$ /etc/hosts
	I0318 14:41:19.449836 1134500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.192	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:41:19.464741 1134500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:41:19.597650 1134500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:41:19.627677 1134500 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491 for IP: 192.168.50.192
	I0318 14:41:19.627735 1134500 certs.go:194] generating shared ca certs ...
	I0318 14:41:19.627764 1134500 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:41:19.627980 1134500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:41:19.628036 1134500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:41:19.628051 1134500 certs.go:256] generating profile certs ...
	I0318 14:41:19.628134 1134500 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/client.key
	I0318 14:41:19.628154 1134500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/client.crt with IP's: []
	I0318 14:41:19.751800 1134500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/client.crt ...
	I0318 14:41:19.751864 1134500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/client.crt: {Name:mk2def4db6a72c7ee3d68a3ab7aecf9b5aa77942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:41:19.752075 1134500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/client.key ...
	I0318 14:41:19.752104 1134500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/client.key: {Name:mkcc15467cdffd41ff74c074c0182eb45e7d3702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:41:19.752218 1134500 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.key.f0fbddc4
	I0318 14:41:19.752235 1134500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.crt.f0fbddc4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.192]
	I0318 14:41:19.897080 1134500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.crt.f0fbddc4 ...
	I0318 14:41:19.897116 1134500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.crt.f0fbddc4: {Name:mk90585878a760f67e7feb15d7722516586d50bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:41:19.897289 1134500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.key.f0fbddc4 ...
	I0318 14:41:19.897303 1134500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.key.f0fbddc4: {Name:mkf9b7a938aff18c9bd5513ffcf0d466b37a2c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:41:19.897373 1134500 certs.go:381] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.crt.f0fbddc4 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.crt
	I0318 14:41:19.897465 1134500 certs.go:385] copying /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.key.f0fbddc4 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.key
	I0318 14:41:19.897523 1134500 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/proxy-client.key
	I0318 14:41:19.897541 1134500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/proxy-client.crt with IP's: []
	I0318 14:41:20.046601 1134500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/proxy-client.crt ...
	I0318 14:41:20.046637 1134500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/proxy-client.crt: {Name:mkb4bbe19cff2196c4a4e1f3a2621ea534327895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:41:20.046813 1134500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/proxy-client.key ...
	I0318 14:41:20.046827 1134500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/proxy-client.key: {Name:mkfecee56e08614ae2ba5c54baf1b58840fe2db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:41:20.047016 1134500 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:41:20.047053 1134500 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:41:20.047060 1134500 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:41:20.047085 1134500 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:41:20.047112 1134500 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:41:20.047132 1134500 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:41:20.047167 1134500 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:41:20.047816 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:41:20.074472 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:41:20.107393 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:41:20.137069 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:41:20.165729 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 14:41:20.195429 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:41:20.224211 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:41:20.251587 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:41:20.280508 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:41:20.307099 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:41:20.334286 1134500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:41:20.362411 1134500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:41:20.381521 1134500 ssh_runner.go:195] Run: openssl version
	I0318 14:41:20.387945 1134500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:41:20.402278 1134500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:41:20.408511 1134500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:41:20.408587 1134500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:41:20.423599 1134500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:41:20.443019 1134500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:41:20.456071 1134500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:41:20.462005 1134500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:41:20.462088 1134500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:41:20.468985 1134500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:41:20.486691 1134500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:41:20.500804 1134500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:41:20.506253 1134500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:41:20.506333 1134500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:41:20.512728 1134500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:41:20.525650 1134500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:41:20.530158 1134500 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 14:41:20.530221 1134500 kubeadm.go:391] StartCluster: {Name:newest-cni-997491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-997491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:41:20.530323 1134500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:41:20.530385 1134500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:41:20.571814 1134500 cri.go:89] found id: ""
	I0318 14:41:20.571930 1134500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 14:41:20.583471 1134500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:41:20.595544 1134500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:41:20.608284 1134500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:41:20.608307 1134500 kubeadm.go:156] found existing configuration files:
	
	I0318 14:41:20.608370 1134500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:41:20.622729 1134500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:41:20.622810 1134500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:41:20.635692 1134500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:41:20.648123 1134500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:41:20.648206 1134500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:41:20.661291 1134500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:41:20.673462 1134500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:41:20.673524 1134500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:41:20.686249 1134500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:41:20.698335 1134500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:41:20.698402 1134500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:41:20.711097 1134500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:41:20.972676 1134500 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:41:31.088712 1134500 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 14:41:31.088791 1134500 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:41:31.088905 1134500 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:41:31.089050 1134500 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:41:31.089216 1134500 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:41:31.089321 1134500 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:41:31.091214 1134500 out.go:204]   - Generating certificates and keys ...
	I0318 14:41:31.091316 1134500 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:41:31.091413 1134500 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:41:31.091482 1134500 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 14:41:31.091554 1134500 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 14:41:31.091623 1134500 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 14:41:31.091678 1134500 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 14:41:31.091744 1134500 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 14:41:31.091953 1134500 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-997491] and IPs [192.168.50.192 127.0.0.1 ::1]
	I0318 14:41:31.092067 1134500 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 14:41:31.092236 1134500 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-997491] and IPs [192.168.50.192 127.0.0.1 ::1]
	I0318 14:41:31.092332 1134500 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 14:41:31.092393 1134500 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 14:41:31.092436 1134500 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 14:41:31.092493 1134500 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:41:31.092570 1134500 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:41:31.092679 1134500 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 14:41:31.092760 1134500 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:41:31.092869 1134500 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:41:31.092944 1134500 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:41:31.093049 1134500 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:41:31.093135 1134500 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:41:31.095091 1134500 out.go:204]   - Booting up control plane ...
	I0318 14:41:31.095214 1134500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:41:31.095320 1134500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:41:31.095404 1134500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:41:31.095579 1134500 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:41:31.095711 1134500 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:41:31.095775 1134500 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:41:31.095978 1134500 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:41:31.096096 1134500 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002830 seconds
	I0318 14:41:31.096243 1134500 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:41:31.096423 1134500 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:41:31.096506 1134500 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:41:31.096761 1134500 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-997491 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:41:31.096850 1134500 kubeadm.go:309] [bootstrap-token] Using token: p3244e.jq6qqftor3nnbcio
	I0318 14:41:31.098375 1134500 out.go:204]   - Configuring RBAC rules ...
	I0318 14:41:31.098516 1134500 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:41:31.098658 1134500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:41:31.098848 1134500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:41:31.099047 1134500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:41:31.099192 1134500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:41:31.099334 1134500 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:41:31.099527 1134500 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:41:31.099576 1134500 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:41:31.099641 1134500 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:41:31.099652 1134500 kubeadm.go:309] 
	I0318 14:41:31.099737 1134500 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:41:31.099746 1134500 kubeadm.go:309] 
	I0318 14:41:31.099864 1134500 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:41:31.099874 1134500 kubeadm.go:309] 
	I0318 14:41:31.099910 1134500 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:41:31.100002 1134500 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:41:31.100074 1134500 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:41:31.100083 1134500 kubeadm.go:309] 
	I0318 14:41:31.100184 1134500 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:41:31.100196 1134500 kubeadm.go:309] 
	I0318 14:41:31.100269 1134500 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:41:31.100278 1134500 kubeadm.go:309] 
	I0318 14:41:31.100344 1134500 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:41:31.100446 1134500 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:41:31.100557 1134500 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:41:31.100566 1134500 kubeadm.go:309] 
	I0318 14:41:31.100677 1134500 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:41:31.100807 1134500 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:41:31.100826 1134500 kubeadm.go:309] 
	I0318 14:41:31.100921 1134500 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p3244e.jq6qqftor3nnbcio \
	I0318 14:41:31.101050 1134500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:41:31.101077 1134500 kubeadm.go:309] 	--control-plane 
	I0318 14:41:31.101085 1134500 kubeadm.go:309] 
	I0318 14:41:31.101201 1134500 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:41:31.101220 1134500 kubeadm.go:309] 
	I0318 14:41:31.101329 1134500 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p3244e.jq6qqftor3nnbcio \
	I0318 14:41:31.101476 1134500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:41:31.101499 1134500 cni.go:84] Creating CNI manager for ""
	I0318 14:41:31.101508 1134500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:41:31.103356 1134500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:41:31.104774 1134500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:41:31.176549 1134500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:41:31.212234 1134500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:41:31.212407 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:31.212407 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-997491 minikube.k8s.io/updated_at=2024_03_18T14_41_31_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=newest-cni-997491 minikube.k8s.io/primary=true
	I0318 14:41:31.515972 1134500 ops.go:34] apiserver oom_adj: -16
	I0318 14:41:31.516028 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:32.016518 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:32.516875 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:33.017046 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:33.516698 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:34.016788 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:34.516462 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:35.016136 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:35.516812 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:36.016553 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:36.516672 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:37.016401 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:37.517062 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:38.016431 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:38.516717 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:39.016137 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:39.517085 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:40.016127 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:40.517061 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:41.017070 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:41.517054 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:42.016426 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:42.516431 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:43.016485 1134500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:41:43.146954 1134500 kubeadm.go:1107] duration metric: took 11.934621493s to wait for elevateKubeSystemPrivileges
	W0318 14:41:43.146999 1134500 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:41:43.147009 1134500 kubeadm.go:393] duration metric: took 22.616794179s to StartCluster
	I0318 14:41:43.147034 1134500 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:41:43.147139 1134500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:41:43.149005 1134500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:41:43.149302 1134500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 14:41:43.149315 1134500 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:41:43.149393 1134500 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-997491"
	I0318 14:41:43.149426 1134500 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-997491"
	I0318 14:41:43.149288 1134500 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:41:43.150780 1134500 out.go:177] * Verifying Kubernetes components...
	I0318 14:41:43.149553 1134500 config.go:182] Loaded profile config "newest-cni-997491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:41:43.149454 1134500 addons.go:69] Setting default-storageclass=true in profile "newest-cni-997491"
	I0318 14:41:43.149456 1134500 host.go:66] Checking if "newest-cni-997491" exists ...
	I0318 14:41:43.152710 1134500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-997491"
	I0318 14:41:43.152757 1134500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:41:43.153073 1134500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:41:43.153104 1134500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:41:43.153118 1134500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:41:43.153161 1134500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:41:43.174411 1134500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0318 14:41:43.174924 1134500 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:41:43.175628 1134500 main.go:141] libmachine: Using API Version  1
	I0318 14:41:43.175688 1134500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:41:43.175933 1134500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0318 14:41:43.176058 1134500 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:41:43.176535 1134500 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:41:43.176591 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetState
	I0318 14:41:43.177095 1134500 main.go:141] libmachine: Using API Version  1
	I0318 14:41:43.177155 1134500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:41:43.177509 1134500 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:41:43.178179 1134500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:41:43.178203 1134500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:41:43.180473 1134500 addons.go:234] Setting addon default-storageclass=true in "newest-cni-997491"
	I0318 14:41:43.180510 1134500 host.go:66] Checking if "newest-cni-997491" exists ...
	I0318 14:41:43.180823 1134500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:41:43.180852 1134500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:41:43.195399 1134500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0318 14:41:43.196272 1134500 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:41:43.196853 1134500 main.go:141] libmachine: Using API Version  1
	I0318 14:41:43.196887 1134500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:41:43.197340 1134500 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:41:43.197604 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetState
	I0318 14:41:43.198817 1134500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37691
	I0318 14:41:43.199420 1134500 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:41:43.199557 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:43.202620 1134500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:41:43.199977 1134500 main.go:141] libmachine: Using API Version  1
	I0318 14:41:43.202668 1134500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:41:43.203084 1134500 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:41:43.204535 1134500 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:41:43.204555 1134500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:41:43.204577 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:43.205265 1134500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:41:43.205326 1134500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:41:43.208639 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:43.209142 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:43.209181 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:43.209393 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:43.209660 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:43.209872 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:43.210242 1134500 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/id_rsa Username:docker}
	I0318 14:41:43.228151 1134500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0318 14:41:43.228674 1134500 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:41:43.229334 1134500 main.go:141] libmachine: Using API Version  1
	I0318 14:41:43.229368 1134500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:41:43.229820 1134500 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:41:43.230073 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetState
	I0318 14:41:43.232138 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:43.232451 1134500 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:41:43.232476 1134500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:41:43.232498 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHHostname
	I0318 14:41:43.235540 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:43.235894 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:0a", ip: ""} in network mk-newest-cni-997491: {Iface:virbr1 ExpiryTime:2024-03-18 15:41:03 +0000 UTC Type:0 Mac:52:54:00:f9:f7:0a Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:newest-cni-997491 Clientid:01:52:54:00:f9:f7:0a}
	I0318 14:41:43.235936 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined IP address 192.168.50.192 and MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:43.236219 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHPort
	I0318 14:41:43.236433 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHKeyPath
	I0318 14:41:43.236615 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetSSHUsername
	I0318 14:41:43.236851 1134500 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/id_rsa Username:docker}
	I0318 14:41:43.388534 1134500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 14:41:43.416985 1134500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:41:43.559774 1134500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:41:43.575187 1134500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:41:44.047414 1134500 start.go:948] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0318 14:41:44.048913 1134500 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:41:44.048990 1134500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:41:44.551961 1134500 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-997491" context rescaled to 1 replicas
	I0318 14:41:44.623585 1134500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063757881s)
	I0318 14:41:44.623655 1134500 main.go:141] libmachine: Making call to close driver server
	I0318 14:41:44.623669 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .Close
	I0318 14:41:44.623671 1134500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.048432642s)
	I0318 14:41:44.623751 1134500 main.go:141] libmachine: Making call to close driver server
	I0318 14:41:44.623815 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .Close
	I0318 14:41:44.623751 1134500 api_server.go:72] duration metric: took 1.474249679s to wait for apiserver process to appear ...
	I0318 14:41:44.623906 1134500 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:41:44.623933 1134500 api_server.go:253] Checking apiserver healthz at https://192.168.50.192:8443/healthz ...
	I0318 14:41:44.624027 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Closing plugin on server side
	I0318 14:41:44.624030 1134500 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:41:44.624065 1134500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:41:44.624075 1134500 main.go:141] libmachine: Making call to close driver server
	I0318 14:41:44.624082 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .Close
	I0318 14:41:44.624156 1134500 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:41:44.624167 1134500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:41:44.624182 1134500 main.go:141] libmachine: Making call to close driver server
	I0318 14:41:44.624178 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Closing plugin on server side
	I0318 14:41:44.624189 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .Close
	I0318 14:41:44.624293 1134500 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:41:44.624303 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Closing plugin on server side
	I0318 14:41:44.624305 1134500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:41:44.624546 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Closing plugin on server side
	I0318 14:41:44.624638 1134500 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:41:44.624648 1134500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:41:44.638273 1134500 api_server.go:279] https://192.168.50.192:8443/healthz returned 200:
	ok
	I0318 14:41:44.639898 1134500 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:41:44.639948 1134500 api_server.go:131] duration metric: took 16.029716ms to wait for apiserver health ...
	I0318 14:41:44.639960 1134500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:41:44.651202 1134500 main.go:141] libmachine: Making call to close driver server
	I0318 14:41:44.651234 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .Close
	I0318 14:41:44.651632 1134500 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:41:44.651656 1134500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:41:44.653380 1134500 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 14:41:44.654603 1134500 addons.go:505] duration metric: took 1.505277635s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 14:41:44.664492 1134500 system_pods.go:59] 8 kube-system pods found
	I0318 14:41:44.664534 1134500 system_pods.go:61] "coredns-76f75df574-7rgpl" [fb16e218-fb47-4273-a198-fc36dcb7a835] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:41:44.664542 1134500 system_pods.go:61] "coredns-76f75df574-glt2g" [c9b986eb-e673-4724-9302-412d3afd7f4e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:41:44.664549 1134500 system_pods.go:61] "etcd-newest-cni-997491" [c8af5f31-7d0e-4c10-9535-6f968447aaf4] Running
	I0318 14:41:44.664554 1134500 system_pods.go:61] "kube-apiserver-newest-cni-997491" [2915ecca-e986-4423-83ea-84b111016ead] Running
	I0318 14:41:44.664557 1134500 system_pods.go:61] "kube-controller-manager-newest-cni-997491" [e7c1d936-9fe7-4deb-8187-eb16941b0994] Running
	I0318 14:41:44.664561 1134500 system_pods.go:61] "kube-proxy-2qmx2" [c08d8732-1195-4ac3-9066-641004c9e385] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:41:44.664565 1134500 system_pods.go:61] "kube-scheduler-newest-cni-997491" [3f27bba8-f8f2-4bf8-ab6d-1d1b3703eba8] Running
	I0318 14:41:44.664572 1134500 system_pods.go:61] "storage-provisioner" [18ecfc66-5d5e-4f89-9499-7bf21d80f0c5] Pending
	I0318 14:41:44.664579 1134500 system_pods.go:74] duration metric: took 24.611321ms to wait for pod list to return data ...
	I0318 14:41:44.664586 1134500 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:41:44.670893 1134500 default_sa.go:45] found service account: "default"
	I0318 14:41:44.670924 1134500 default_sa.go:55] duration metric: took 6.329902ms for default service account to be created ...
	I0318 14:41:44.670939 1134500 kubeadm.go:576] duration metric: took 1.521441105s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 14:41:44.670956 1134500 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:41:44.688711 1134500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:41:44.688748 1134500 node_conditions.go:123] node cpu capacity is 2
	I0318 14:41:44.688766 1134500 node_conditions.go:105] duration metric: took 17.804095ms to run NodePressure ...
	I0318 14:41:44.688783 1134500 start.go:240] waiting for startup goroutines ...
	I0318 14:41:44.688793 1134500 start.go:245] waiting for cluster config update ...
	I0318 14:41:44.688807 1134500 start.go:254] writing updated cluster config ...
	I0318 14:41:44.689166 1134500 ssh_runner.go:195] Run: rm -f paused
	I0318 14:41:44.774710 1134500 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 14:41:44.776477 1134500 out.go:177] * Done! kubectl is now configured to use "newest-cni-997491" cluster and "default" namespace by default
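
The verification steps recorded above (apiserver healthz, the default service account, and the kube-system pod list) can be reproduced by hand against the cluster this run produced. A minimal sketch, assuming the kubeconfig context "newest-cni-997491" that the log reports was written, and standard kubectl behaviour; these commands are illustrative and are not the ones the test harness itself executes:

	# same endpoint the log polls at https://192.168.50.192:8443/healthz
	kubectl --context newest-cni-997491 get --raw /healthz
	# default service account that default_sa.go waits for
	kubectl --context newest-cni-997491 -n default get serviceaccount default
	# kube-system pods enumerated by system_pods.go
	kubectl --context newest-cni-997491 -n kube-system get pods

Once the control plane reports healthy, all three should return promptly, consistent with the ~1.5s wait recorded by kubeadm.go:576 above.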
	
	
	==> CRI-O <==
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.225982446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772917225957646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bfc5e0c6-b800-425f-9843-82b4117aacef name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.226800706Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57043eaf-731a-4518-ab19-33e565b786e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.226855430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57043eaf-731a-4518-ab19-33e565b786e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.227054105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1344f16b5555a97002ed52a6c3834e9966e397fd09e5fdf2c36bcc9de9f6ee07,PodSandboxId:169cb175ee20db264ecbcc3a7520202f58031191dbd0f9d96f00def65c5e1342,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771979133803181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaa79fa-95b2-40d3-af0c-db60292f77e3,},Annotations:map[string]string{io.kubernetes.container.hash: 32147c6c,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8467179103054fb6bd5637c3eae01e238b8e692b8f098eadf5dd2fe216e9ea0,PodSandboxId:fd6356e98c68f7d1d419c2af5a512d68e2f0dac903234a215f01521c1eaa8d69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771976991935985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4knv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2afcd2a-a9a3-494b-8f2b-c532cd60a569,},Annotations:map[string]string{io.kubernetes.container.hash: b1c442f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b542e08e9c09dff3b3bbc84f43ad19670e25229602837fa457a544312a38dc,PodSandboxId:9100da1d021db6c122d322af0877623cffdf09b84b95031985570fc47208b9f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771977037242483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fm52r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
8d62bd5-d44a-4de7-a73c-7cd615b34470,},Annotations:map[string]string{io.kubernetes.container.hash: 1be32d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9668408c6e663a80a0cab513d18ea9f37971e1e01f4b7560833dca876b5ce93a,PodSandboxId:97c5545ad25814d29cea55b2d18fdd7d7b2ddd66668b4eb048e2dd08d2bc3323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710771976477657420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90d43cdd-0e1d-4158-9403-91bb7b556f70,},Annotations:map[string]string{io.kubernetes.container.hash: 312cd0ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a1b030bde32d78f1b15deb3645a01627c23ad954ab6837ec01c871d2fe3a9a,PodSandboxId:18165a55b415b62a5225475b069e17c4116523a9d25fa1e2f821ae592e448467,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771957435610006,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c19e521dbd99b569b23aeba612d73c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21919201899501e0d5cb276865652fddb60e3a60b34d32ef961afbd51e92b13,PodSandboxId:99b1a503f3be39f1d359dae878a8508e767767cfba4e3fd1dd86c7de10b319c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771957371605643,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c340aa58af706176d89edc63c324d8ab28c946bfcdb8fb227646400e547a4cc,PodSandboxId:d825a07ba7fca67c031a5ac3f2ea03cd2dd42aeeaca7de3181a1a333c9413cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710771957397158494,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8669cf547316ec4dc91eaa007d2b3839,},Annotations:map[string]string{io.kubernetes.container.hash: f7ca46e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bf3db5a22cc83a41f70eebc56949fa622b976590c70f6c03450be3dfe6fb67,PodSandboxId:5e36507d9de51336a71f56211432bf9e31a61577b5ea567614463df898470e9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710771957346213706,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b6945ac6b0392727f3194f0635bd6c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3290b4ce5efac6bd753f0d9ab26d22a63ed9f1fbfe54ac78b0c66f3c4d0b9dfd,PodSandboxId:a1b555e34c58a059c0ad289c8f270ec2e4ebee2fa9b22839698939ba4debcc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710771663535583956,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57043eaf-731a-4518-ab19-33e565b786e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.283757381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2d66c98-c316-4ce8-9363-73e56c8e9c30 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.283859521Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2d66c98-c316-4ce8-9363-73e56c8e9c30 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.286780087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6199cb7-75f4-4e5c-afe2-5c68e1040102 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.287344780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772917287310315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6199cb7-75f4-4e5c-afe2-5c68e1040102 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.288479288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cb93b93-21d2-424c-8586-defbc35ffa9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.288924440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cb93b93-21d2-424c-8586-defbc35ffa9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.289221348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1344f16b5555a97002ed52a6c3834e9966e397fd09e5fdf2c36bcc9de9f6ee07,PodSandboxId:169cb175ee20db264ecbcc3a7520202f58031191dbd0f9d96f00def65c5e1342,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771979133803181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaa79fa-95b2-40d3-af0c-db60292f77e3,},Annotations:map[string]string{io.kubernetes.container.hash: 32147c6c,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8467179103054fb6bd5637c3eae01e238b8e692b8f098eadf5dd2fe216e9ea0,PodSandboxId:fd6356e98c68f7d1d419c2af5a512d68e2f0dac903234a215f01521c1eaa8d69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771976991935985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4knv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2afcd2a-a9a3-494b-8f2b-c532cd60a569,},Annotations:map[string]string{io.kubernetes.container.hash: b1c442f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b542e08e9c09dff3b3bbc84f43ad19670e25229602837fa457a544312a38dc,PodSandboxId:9100da1d021db6c122d322af0877623cffdf09b84b95031985570fc47208b9f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771977037242483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fm52r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
8d62bd5-d44a-4de7-a73c-7cd615b34470,},Annotations:map[string]string{io.kubernetes.container.hash: 1be32d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9668408c6e663a80a0cab513d18ea9f37971e1e01f4b7560833dca876b5ce93a,PodSandboxId:97c5545ad25814d29cea55b2d18fdd7d7b2ddd66668b4eb048e2dd08d2bc3323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710771976477657420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90d43cdd-0e1d-4158-9403-91bb7b556f70,},Annotations:map[string]string{io.kubernetes.container.hash: 312cd0ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a1b030bde32d78f1b15deb3645a01627c23ad954ab6837ec01c871d2fe3a9a,PodSandboxId:18165a55b415b62a5225475b069e17c4116523a9d25fa1e2f821ae592e448467,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771957435610006,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c19e521dbd99b569b23aeba612d73c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21919201899501e0d5cb276865652fddb60e3a60b34d32ef961afbd51e92b13,PodSandboxId:99b1a503f3be39f1d359dae878a8508e767767cfba4e3fd1dd86c7de10b319c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771957371605643,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c340aa58af706176d89edc63c324d8ab28c946bfcdb8fb227646400e547a4cc,PodSandboxId:d825a07ba7fca67c031a5ac3f2ea03cd2dd42aeeaca7de3181a1a333c9413cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710771957397158494,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8669cf547316ec4dc91eaa007d2b3839,},Annotations:map[string]string{io.kubernetes.container.hash: f7ca46e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bf3db5a22cc83a41f70eebc56949fa622b976590c70f6c03450be3dfe6fb67,PodSandboxId:5e36507d9de51336a71f56211432bf9e31a61577b5ea567614463df898470e9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710771957346213706,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b6945ac6b0392727f3194f0635bd6c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3290b4ce5efac6bd753f0d9ab26d22a63ed9f1fbfe54ac78b0c66f3c4d0b9dfd,PodSandboxId:a1b555e34c58a059c0ad289c8f270ec2e4ebee2fa9b22839698939ba4debcc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710771663535583956,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cb93b93-21d2-424c-8586-defbc35ffa9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.343171340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9ef340d-ae25-43fd-a1d2-f439cecf0fe9 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.343268202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9ef340d-ae25-43fd-a1d2-f439cecf0fe9 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.345056537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f29162e0-5108-4130-a13e-969e4899b502 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.346342421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772917346306166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f29162e0-5108-4130-a13e-969e4899b502 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.348040827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=984ae7aa-87b9-435c-974d-1ff04fe48a70 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.348117198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=984ae7aa-87b9-435c-974d-1ff04fe48a70 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.348372305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1344f16b5555a97002ed52a6c3834e9966e397fd09e5fdf2c36bcc9de9f6ee07,PodSandboxId:169cb175ee20db264ecbcc3a7520202f58031191dbd0f9d96f00def65c5e1342,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771979133803181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaa79fa-95b2-40d3-af0c-db60292f77e3,},Annotations:map[string]string{io.kubernetes.container.hash: 32147c6c,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8467179103054fb6bd5637c3eae01e238b8e692b8f098eadf5dd2fe216e9ea0,PodSandboxId:fd6356e98c68f7d1d419c2af5a512d68e2f0dac903234a215f01521c1eaa8d69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771976991935985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4knv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2afcd2a-a9a3-494b-8f2b-c532cd60a569,},Annotations:map[string]string{io.kubernetes.container.hash: b1c442f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b542e08e9c09dff3b3bbc84f43ad19670e25229602837fa457a544312a38dc,PodSandboxId:9100da1d021db6c122d322af0877623cffdf09b84b95031985570fc47208b9f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771977037242483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fm52r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
8d62bd5-d44a-4de7-a73c-7cd615b34470,},Annotations:map[string]string{io.kubernetes.container.hash: 1be32d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9668408c6e663a80a0cab513d18ea9f37971e1e01f4b7560833dca876b5ce93a,PodSandboxId:97c5545ad25814d29cea55b2d18fdd7d7b2ddd66668b4eb048e2dd08d2bc3323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710771976477657420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90d43cdd-0e1d-4158-9403-91bb7b556f70,},Annotations:map[string]string{io.kubernetes.container.hash: 312cd0ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a1b030bde32d78f1b15deb3645a01627c23ad954ab6837ec01c871d2fe3a9a,PodSandboxId:18165a55b415b62a5225475b069e17c4116523a9d25fa1e2f821ae592e448467,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771957435610006,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c19e521dbd99b569b23aeba612d73c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21919201899501e0d5cb276865652fddb60e3a60b34d32ef961afbd51e92b13,PodSandboxId:99b1a503f3be39f1d359dae878a8508e767767cfba4e3fd1dd86c7de10b319c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771957371605643,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c340aa58af706176d89edc63c324d8ab28c946bfcdb8fb227646400e547a4cc,PodSandboxId:d825a07ba7fca67c031a5ac3f2ea03cd2dd42aeeaca7de3181a1a333c9413cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710771957397158494,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8669cf547316ec4dc91eaa007d2b3839,},Annotations:map[string]string{io.kubernetes.container.hash: f7ca46e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bf3db5a22cc83a41f70eebc56949fa622b976590c70f6c03450be3dfe6fb67,PodSandboxId:5e36507d9de51336a71f56211432bf9e31a61577b5ea567614463df898470e9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710771957346213706,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b6945ac6b0392727f3194f0635bd6c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3290b4ce5efac6bd753f0d9ab26d22a63ed9f1fbfe54ac78b0c66f3c4d0b9dfd,PodSandboxId:a1b555e34c58a059c0ad289c8f270ec2e4ebee2fa9b22839698939ba4debcc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710771663535583956,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=984ae7aa-87b9-435c-974d-1ff04fe48a70 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.398646785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea299748-0316-4239-9083-6a2cdb5c2b97 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.398814027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea299748-0316-4239-9083-6a2cdb5c2b97 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.403975369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6e1d32a-0e3b-4c00-9cf7-04953087109e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.404443237Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772917404360741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6e1d32a-0e3b-4c00-9cf7-04953087109e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.405665417Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f52e62f-494e-48d8-80a5-8eb256ed8ae8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.405725439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f52e62f-494e-48d8-80a5-8eb256ed8ae8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:57 embed-certs-767719 crio[694]: time="2024-03-18 14:41:57.405927082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1344f16b5555a97002ed52a6c3834e9966e397fd09e5fdf2c36bcc9de9f6ee07,PodSandboxId:169cb175ee20db264ecbcc3a7520202f58031191dbd0f9d96f00def65c5e1342,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771979133803181,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aaa79fa-95b2-40d3-af0c-db60292f77e3,},Annotations:map[string]string{io.kubernetes.container.hash: 32147c6c,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8467179103054fb6bd5637c3eae01e238b8e692b8f098eadf5dd2fe216e9ea0,PodSandboxId:fd6356e98c68f7d1d419c2af5a512d68e2f0dac903234a215f01521c1eaa8d69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771976991935985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4knv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2afcd2a-a9a3-494b-8f2b-c532cd60a569,},Annotations:map[string]string{io.kubernetes.container.hash: b1c442f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b542e08e9c09dff3b3bbc84f43ad19670e25229602837fa457a544312a38dc,PodSandboxId:9100da1d021db6c122d322af0877623cffdf09b84b95031985570fc47208b9f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771977037242483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fm52r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
8d62bd5-d44a-4de7-a73c-7cd615b34470,},Annotations:map[string]string{io.kubernetes.container.hash: 1be32d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9668408c6e663a80a0cab513d18ea9f37971e1e01f4b7560833dca876b5ce93a,PodSandboxId:97c5545ad25814d29cea55b2d18fdd7d7b2ddd66668b4eb048e2dd08d2bc3323,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710771976477657420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90d43cdd-0e1d-4158-9403-91bb7b556f70,},Annotations:map[string]string{io.kubernetes.container.hash: 312cd0ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a1b030bde32d78f1b15deb3645a01627c23ad954ab6837ec01c871d2fe3a9a,PodSandboxId:18165a55b415b62a5225475b069e17c4116523a9d25fa1e2f821ae592e448467,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771957435610006,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c19e521dbd99b569b23aeba612d73c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21919201899501e0d5cb276865652fddb60e3a60b34d32ef961afbd51e92b13,PodSandboxId:99b1a503f3be39f1d359dae878a8508e767767cfba4e3fd1dd86c7de10b319c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771957371605643,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c340aa58af706176d89edc63c324d8ab28c946bfcdb8fb227646400e547a4cc,PodSandboxId:d825a07ba7fca67c031a5ac3f2ea03cd2dd42aeeaca7de3181a1a333c9413cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710771957397158494,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8669cf547316ec4dc91eaa007d2b3839,},Annotations:map[string]string{io.kubernetes.container.hash: f7ca46e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43bf3db5a22cc83a41f70eebc56949fa622b976590c70f6c03450be3dfe6fb67,PodSandboxId:5e36507d9de51336a71f56211432bf9e31a61577b5ea567614463df898470e9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710771957346213706,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b6945ac6b0392727f3194f0635bd6c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3290b4ce5efac6bd753f0d9ab26d22a63ed9f1fbfe54ac78b0c66f3c4d0b9dfd,PodSandboxId:a1b555e34c58a059c0ad289c8f270ec2e4ebee2fa9b22839698939ba4debcc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710771663535583956,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9c7d8592454562124097157bffaeb4,},Annotations:map[string]string{io.kubernetes.container.hash: f8bbca72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f52e62f-494e-48d8-80a5-8eb256ed8ae8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1344f16b5555a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   169cb175ee20d       storage-provisioner
	12b542e08e9c0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   9100da1d021db       coredns-5dd5756b68-fm52r
	e846717910305       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   fd6356e98c68f       coredns-5dd5756b68-4knv5
	9668408c6e663       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   97c5545ad2581       kube-proxy-f4547
	a7a1b030bde32       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   18165a55b415b       kube-scheduler-embed-certs-767719
	5c340aa58af70       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   d825a07ba7fca       etcd-embed-certs-767719
	f219192018995       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   99b1a503f3be3       kube-apiserver-embed-certs-767719
	43bf3db5a22cc       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   5e36507d9de51       kube-controller-manager-embed-certs-767719
	3290b4ce5efac       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   20 minutes ago      Exited              kube-apiserver            1                   a1b555e34c58a       kube-apiserver-embed-certs-767719
	
	
	==> coredns [12b542e08e9c09dff3b3bbc84f43ad19670e25229602837fa457a544312a38dc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [e8467179103054fb6bd5637c3eae01e238b8e692b8f098eadf5dd2fe216e9ea0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-767719
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-767719
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=embed-certs-767719
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T14_26_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 14:26:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-767719
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:41:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:41:43 +0000   Mon, 18 Mar 2024 14:25:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:41:43 +0000   Mon, 18 Mar 2024 14:25:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:41:43 +0000   Mon, 18 Mar 2024 14:25:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:41:43 +0000   Mon, 18 Mar 2024 14:26:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.45
	  Hostname:    embed-certs-767719
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 13390a57d52543dbaf6fbe438b5b11b5
	  System UUID:                13390a57-d525-43db-af6f-be438b5b11b5
	  Boot ID:                    23ea0ecf-773f-457f-96a0-b747992c8e2e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-4knv5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5dd5756b68-fm52r                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-767719                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-767719             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-767719    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-f4547                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-767719             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-w8z6p               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-767719 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-767719 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-767719 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node embed-certs-767719 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node embed-certs-767719 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-767719 event: Registered Node embed-certs-767719 in Controller
	
	
	==> dmesg <==
	[  +0.052942] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040960] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.576024] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.893396] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.645190] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.411568] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.064737] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064767] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.225871] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.130563] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.266455] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +5.326384] systemd-fstab-generator[778]: Ignoring "noauto" option for root device
	[  +0.064378] kauditd_printk_skb: 130 callbacks suppressed
	[Mar18 14:21] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +5.607731] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.787586] kauditd_printk_skb: 69 callbacks suppressed
	[Mar18 14:25] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.840373] systemd-fstab-generator[3399]: Ignoring "noauto" option for root device
	[  +4.756152] kauditd_printk_skb: 54 callbacks suppressed
	[Mar18 14:26] systemd-fstab-generator[3723]: Ignoring "noauto" option for root device
	[ +13.005742] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +0.089530] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 14:27] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [5c340aa58af706176d89edc63c324d8ab28c946bfcdb8fb227646400e547a4cc] <==
	{"level":"info","ts":"2024-03-18T14:25:57.960023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a84d3445f2145a16 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T14:25:57.960051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a84d3445f2145a16 received MsgVoteResp from a84d3445f2145a16 at term 2"}
	{"level":"info","ts":"2024-03-18T14:25:57.960161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a84d3445f2145a16 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T14:25:57.960197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a84d3445f2145a16 elected leader a84d3445f2145a16 at term 2"}
	{"level":"info","ts":"2024-03-18T14:25:57.964915Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:25:57.967712Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a84d3445f2145a16","local-member-attributes":"{Name:embed-certs-767719 ClientURLs:[https://192.168.72.45:2379]}","request-path":"/0/members/a84d3445f2145a16/attributes","cluster-id":"45408efcc8fb3821","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T14:25:57.969456Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:25:57.970689Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.45:2379"}
	{"level":"info","ts":"2024-03-18T14:25:57.974496Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:25:57.975604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T14:25:57.982457Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T14:25:57.982552Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T14:25:57.982809Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"45408efcc8fb3821","local-member-id":"a84d3445f2145a16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:25:57.982909Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:25:57.982937Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:35:58.40925Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":669}
	{"level":"info","ts":"2024-03-18T14:35:58.41176Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":669,"took":"2.099791ms","hash":4212240924}
	{"level":"info","ts":"2024-03-18T14:35:58.411842Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4212240924,"revision":669,"compact-revision":-1}
	{"level":"info","ts":"2024-03-18T14:40:58.418242Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":912}
	{"level":"info","ts":"2024-03-18T14:40:58.421148Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":912,"took":"2.124064ms","hash":2222364658}
	{"level":"info","ts":"2024-03-18T14:40:58.421276Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2222364658,"revision":912,"compact-revision":669}
	{"level":"info","ts":"2024-03-18T14:41:21.991243Z","caller":"traceutil/trace.go:171","msg":"trace[2144342304] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"102.267609ms","start":"2024-03-18T14:41:21.888925Z","end":"2024-03-18T14:41:21.991192Z","steps":["trace[2144342304] 'process raft request'  (duration: 102.034985ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T14:41:22.17787Z","caller":"traceutil/trace.go:171","msg":"trace[1827143279] linearizableReadLoop","detail":"{readStateIndex:1376; appliedIndex:1375; }","duration":"170.647971ms","start":"2024-03-18T14:41:22.007193Z","end":"2024-03-18T14:41:22.177841Z","steps":["trace[1827143279] 'read index received'  (duration: 101.596323ms)","trace[1827143279] 'applied index is now lower than readState.Index'  (duration: 69.050184ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T14:41:22.17835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.054506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T14:41:22.178706Z","caller":"traceutil/trace.go:171","msg":"trace[1740110266] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1175; }","duration":"171.51711ms","start":"2024-03-18T14:41:22.007167Z","end":"2024-03-18T14:41:22.178684Z","steps":["trace[1740110266] 'agreement among raft nodes before linearized reading'  (duration: 171.014793ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:41:57 up 21 min,  0 users,  load average: 0.15, 0.19, 0.19
	Linux embed-certs-767719 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3290b4ce5efac6bd753f0d9ab26d22a63ed9f1fbfe54ac78b0c66f3c4d0b9dfd] <==
	W0318 14:25:49.829714       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:49.910799       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.010863       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.209886       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.235635       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.306936       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.400778       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.430156       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.441319       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.548084       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.594705       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.604653       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.802992       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.899331       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:50.921985       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.057366       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.107554       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.138251       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.164040       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.189613       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.281240       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.281644       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.480711       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.619661       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 14:25:51.665695       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f21919201899501e0d5cb276865652fddb60e3a60b34d32ef961afbd51e92b13] <==
	E0318 14:37:01.196449       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:37:01.196484       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:38:00.090191       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 14:39:00.090562       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:39:01.195200       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:39:01.195618       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:39:01.195668       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:39:01.197516       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:39:01.197690       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:39:01.197726       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:40:00.091019       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 14:41:00.090593       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:41:00.197862       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:41:00.197990       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:41:00.198468       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:41:01.198724       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:41:01.198896       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:41:01.198906       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:41:01.198724       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:41:01.199044       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:41:01.199962       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [43bf3db5a22cc83a41f70eebc56949fa622b976590c70f6c03450be3dfe6fb67] <==
	I0318 14:36:15.752098       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:36:45.280019       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:36:45.761271       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:37:12.700220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="213.408µs"
	E0318 14:37:15.286271       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:37:15.771883       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:37:27.700817       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="259.101µs"
	E0318 14:37:45.294351       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:37:45.781594       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:38:15.301340       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:38:15.790282       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:38:45.308214       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:38:45.799553       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:39:15.314633       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:39:15.809012       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:39:45.320839       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:39:45.824553       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:40:15.327011       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:40:15.834342       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:40:45.333603       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:40:45.845889       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:41:15.341965       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:41:15.860596       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:41:45.349274       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:41:45.880813       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9668408c6e663a80a0cab513d18ea9f37971e1e01f4b7560833dca876b5ce93a] <==
	I0318 14:26:16.980114       1 server_others.go:69] "Using iptables proxy"
	I0318 14:26:17.204468       1 node.go:141] Successfully retrieved node IP: 192.168.72.45
	I0318 14:26:17.563658       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 14:26:17.563759       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 14:26:17.571611       1 server_others.go:152] "Using iptables Proxier"
	I0318 14:26:17.572335       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 14:26:17.573273       1 server.go:846] "Version info" version="v1.28.4"
	I0318 14:26:17.573478       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 14:26:17.575853       1 config.go:188] "Starting service config controller"
	I0318 14:26:17.576859       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 14:26:17.576937       1 config.go:97] "Starting endpoint slice config controller"
	I0318 14:26:17.576968       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 14:26:17.578883       1 config.go:315] "Starting node config controller"
	I0318 14:26:17.580783       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 14:26:17.677474       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 14:26:17.680945       1 shared_informer.go:318] Caches are synced for node config
	I0318 14:26:17.680975       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [a7a1b030bde32d78f1b15deb3645a01627c23ad954ab6837ec01c871d2fe3a9a] <==
	W0318 14:26:01.087721       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:01.087788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:01.105867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 14:26:01.105929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 14:26:01.123367       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 14:26:01.123482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 14:26:01.232802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:01.232848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:01.238454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 14:26:01.238521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 14:26:01.269497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 14:26:01.269546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 14:26:01.290370       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 14:26:01.290524       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 14:26:01.413472       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 14:26:01.413876       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 14:26:01.432878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 14:26:01.433016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 14:26:01.455471       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 14:26:01.455609       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 14:26:01.532229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:01.532296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:01.652797       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 14:26:01.652850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 14:26:04.360603       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:39:03 embed-certs-767719 kubelet[3730]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:39:03 embed-certs-767719 kubelet[3730]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:39:14 embed-certs-767719 kubelet[3730]: E0318 14:39:14.684343    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:39:25 embed-certs-767719 kubelet[3730]: E0318 14:39:25.684022    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:39:38 embed-certs-767719 kubelet[3730]: E0318 14:39:38.684826    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:39:52 embed-certs-767719 kubelet[3730]: E0318 14:39:52.684522    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:40:03 embed-certs-767719 kubelet[3730]: E0318 14:40:03.738208    3730 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:40:03 embed-certs-767719 kubelet[3730]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:40:03 embed-certs-767719 kubelet[3730]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:40:03 embed-certs-767719 kubelet[3730]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:40:03 embed-certs-767719 kubelet[3730]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:40:05 embed-certs-767719 kubelet[3730]: E0318 14:40:05.683846    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:40:20 embed-certs-767719 kubelet[3730]: E0318 14:40:20.683874    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:40:34 embed-certs-767719 kubelet[3730]: E0318 14:40:34.683182    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:40:47 embed-certs-767719 kubelet[3730]: E0318 14:40:47.684374    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:40:59 embed-certs-767719 kubelet[3730]: E0318 14:40:59.684307    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:41:03 embed-certs-767719 kubelet[3730]: E0318 14:41:03.746668    3730 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:41:03 embed-certs-767719 kubelet[3730]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:41:03 embed-certs-767719 kubelet[3730]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:41:03 embed-certs-767719 kubelet[3730]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:41:03 embed-certs-767719 kubelet[3730]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:41:10 embed-certs-767719 kubelet[3730]: E0318 14:41:10.683946    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:41:24 embed-certs-767719 kubelet[3730]: E0318 14:41:24.683806    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:41:36 embed-certs-767719 kubelet[3730]: E0318 14:41:36.684578    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	Mar 18 14:41:47 embed-certs-767719 kubelet[3730]: E0318 14:41:47.683680    3730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-w8z6p" podUID="e4621ef8-7807-48ba-a57c-d5804dbfb784"
	
	
	==> storage-provisioner [1344f16b5555a97002ed52a6c3834e9966e397fd09e5fdf2c36bcc9de9f6ee07] <==
	I0318 14:26:19.241097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 14:26:19.251946       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 14:26:19.252203       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 14:26:19.271067       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 14:26:19.271456       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-767719_1b608b2e-b75b-4b86-a809-9846c8e1406b!
	I0318 14:26:19.272811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f704de54-9d45-4216-9e7b-770f62932150", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-767719_1b608b2e-b75b-4b86-a809-9846c8e1406b became leader
	I0318 14:26:19.371662       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-767719_1b608b2e-b75b-4b86-a809-9846c8e1406b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767719 -n embed-certs-767719
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-767719 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-w8z6p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-767719 describe pod metrics-server-57f55c9bc5-w8z6p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-767719 describe pod metrics-server-57f55c9bc5-w8z6p: exit status 1 (77.264762ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-w8z6p" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-767719 describe pod metrics-server-57f55c9bc5-w8z6p: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (393.91s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (373.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:41:57.388080219 +0000 UTC m=+7035.181539422
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-075922 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-075922 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.997µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-075922 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-075922 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-075922 logs -n 25: (1.483360758s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-784874 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | disable-driver-mounts-784874                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:14 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-188109             | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767719            | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-075922  | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC | 18 Mar 24 14:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC |                     |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-782728        | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-188109                  | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC | 18 Mar 24 14:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767719                 | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-075922       | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-782728             | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:40 UTC | 18 Mar 24 14:40 UTC |
	| start   | -p newest-cni-997491 --memory=2200 --alsologtostderr   | newest-cni-997491            | jenkins | v1.32.0 | 18 Mar 24 14:40 UTC | 18 Mar 24 14:41 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:41 UTC | 18 Mar 24 14:41 UTC |
	| addons  | enable metrics-server -p newest-cni-997491             | newest-cni-997491            | jenkins | v1.32.0 | 18 Mar 24 14:41 UTC | 18 Mar 24 14:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-997491                                   | newest-cni-997491            | jenkins | v1.32.0 | 18 Mar 24 14:41 UTC | 18 Mar 24 14:41 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-997491                  | newest-cni-997491            | jenkins | v1.32.0 | 18 Mar 24 14:41 UTC | 18 Mar 24 14:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-997491 --memory=2200 --alsologtostderr   | newest-cni-997491            | jenkins | v1.32.0 | 18 Mar 24 14:41 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:41:57
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:41:57.579555 1135329 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:41:57.579708 1135329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:41:57.579721 1135329 out.go:304] Setting ErrFile to fd 2...
	I0318 14:41:57.579726 1135329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:41:57.579984 1135329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:41:57.580609 1135329 out.go:298] Setting JSON to false
	I0318 14:41:57.582392 1135329 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":23065,"bootTime":1710749853,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:41:57.582646 1135329 start.go:139] virtualization: kvm guest
	I0318 14:41:57.585395 1135329 out.go:177] * [newest-cni-997491] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:41:57.586735 1135329 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:41:57.586792 1135329 notify.go:220] Checking for updates...
	I0318 14:41:57.588118 1135329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:41:57.589529 1135329 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:41:57.591007 1135329 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:41:57.592473 1135329 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:41:57.593794 1135329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:41:57.595743 1135329 config.go:182] Loaded profile config "newest-cni-997491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:41:57.596321 1135329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:41:57.596399 1135329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:41:57.613577 1135329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0318 14:41:57.614025 1135329 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:41:57.614690 1135329 main.go:141] libmachine: Using API Version  1
	I0318 14:41:57.614715 1135329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:41:57.615097 1135329 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:41:57.615316 1135329 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:57.615672 1135329 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:41:57.616109 1135329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:41:57.616150 1135329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:41:57.637073 1135329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37333
	I0318 14:41:57.637667 1135329 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:41:57.638334 1135329 main.go:141] libmachine: Using API Version  1
	I0318 14:41:57.638396 1135329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:41:57.638839 1135329 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:41:57.639071 1135329 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:57.679298 1135329 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 14:41:57.680483 1135329 start.go:297] selected driver: kvm2
	I0318 14:41:57.680514 1135329 start.go:901] validating driver "kvm2" against &{Name:newest-cni-997491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.0-rc.2 ClusterName:newest-cni-997491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pod
s:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:41:57.680652 1135329 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:41:57.681431 1135329 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:41:57.681519 1135329 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:41:57.698475 1135329 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:41:57.698901 1135329 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 14:41:57.698983 1135329 cni.go:84] Creating CNI manager for ""
	I0318 14:41:57.699004 1135329 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:41:57.699055 1135329 start.go:340] cluster config:
	{Name:newest-cni-997491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-997491 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:41:57.699207 1135329 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:41:57.701080 1135329 out.go:177] * Starting "newest-cni-997491" primary control-plane node in "newest-cni-997491" cluster
	I0318 14:41:57.702499 1135329 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:41:57.702543 1135329 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 14:41:57.702562 1135329 cache.go:56] Caching tarball of preloaded images
	I0318 14:41:57.702665 1135329 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:41:57.702681 1135329 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0318 14:41:57.702862 1135329 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/config.json ...
	I0318 14:41:57.703063 1135329 start.go:360] acquireMachinesLock for newest-cni-997491: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:41:57.703112 1135329 start.go:364] duration metric: took 27.944µs to acquireMachinesLock for "newest-cni-997491"
	I0318 14:41:57.703130 1135329 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:41:57.703141 1135329 fix.go:54] fixHost starting: 
	I0318 14:41:57.703473 1135329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:41:57.703513 1135329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:41:57.723851 1135329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0318 14:41:57.724342 1135329 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:41:57.724847 1135329 main.go:141] libmachine: Using API Version  1
	I0318 14:41:57.724873 1135329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:41:57.725218 1135329 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:41:57.725398 1135329 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:41:57.725525 1135329 main.go:141] libmachine: (newest-cni-997491) Calling .GetState
	I0318 14:41:57.727356 1135329 fix.go:112] recreateIfNeeded on newest-cni-997491: state=Stopped err=<nil>
	I0318 14:41:57.727389 1135329 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	W0318 14:41:57.730601 1135329 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:41:57.733806 1135329 out.go:177] * Restarting existing kvm2 VM for "newest-cni-997491" ...
	
	
	==> CRI-O <==
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.245870141Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecef5192-facf-4103-a047-cabd3791728f name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.247244197Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0dff3bc6-a1d9-4de7-a1ff-d203a820bffa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.247846090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772918247784403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0dff3bc6-a1d9-4de7-a1ff-d203a820bffa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.248604005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c992c70-b389-47bb-8b00-7a2ca292b909 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.248799770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c992c70-b389-47bb-8b00-7a2ca292b909 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.249815753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6530368ed5c19d50d14c500565b1329228ea8efa8dc4e08f1e8da327ce5d5be,PodSandboxId:09b0dc6471521ff046ef51a3e04c66ed727903e3dbf3dc52e1811f81f7cbcbdd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771998615366467,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8954270-a7e4-4584-860f-eea1ffd428c5,},Annotations:map[string]string{io.kubernetes.container.hash: d32f37fb,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3afcb1dd7909b321281cf5a01f61655ce3d83a2a2fc62469c60e0a9f2deb99d,PodSandboxId:dfcece09f20d56b974bc98e6c78cb281c12ccd38355e787fba31b45113df864b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998504590794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zqnfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2603cb56-7d34-4a9e-8614-9d4f4610da6d,},Annotations:map[string]string{io.kubernetes.container.hash: 95e18d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8173e8ddba28758d34b7e79cc6df0c0b8cdb9a98897184d7e4604310a691d,PodSandboxId:507494b76833c6d6657d7de62f81baee9c66ed98380b95de6d69de88e25d5ead,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998436951863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c8q9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 207d4899-9bf3-4f4b-ab21-bc35079a0bda,},Annotations:map[string]string{io.kubernetes.container.hash: 75fa6efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946876f232cf62e7167ded808256cb1b56bf060b281f4cadc2b1e458b1d104d4,PodSandboxId:8125ae53624d55518997da9202c44f7026e43a3680a50d5a5a47f2424b9d532c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,
CreatedAt:1710771996669309971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bzwvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52bafde-a25e-4496-a987-42d88c036982,},Annotations:map[string]string{io.kubernetes.container.hash: ed8fd302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b995e68f898a1a1ea4cb4e8bf33f0df409736f666941e05b3b1f1b0f78a2f4,PodSandboxId:772252e46b79d1197ab4e7b68a7c74350054576f14539b0b26229f8f9669f248,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771977348959285
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25ac09be5cba30be6df0e359c114df82,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9686e1e42595f2d463c25afb530ae72b29c52df7fba35353127bd2642c1de58,PodSandboxId:44bb254690a2d06096d00d7355fe5cb1509660af9e9144c81ab0b7f1cc63afee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771977343426375,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f5101ea08afdfc89ee317da149610b,},Annotations:map[string]string{io.kubernetes.container.hash: b451bdfa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a01556a0b322172234d832dbfbae2f30d1710f7ef44151f4d20f3f63028905,PodSandboxId:038299a9cef4ff6466cabda6a2a3fc9972e33f012ee4e1aea05c1c1664f2894b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107719772700
79264,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4b77a9b7b4ea24d3851dcc48a94a25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f41ad4a31ca75bdbf97b201eff283130f82a87b67a1e81e8a3cae7ce149709,PodSandboxId:a1a210c5eb4252cda2e953788792f52fb7380e853a0ff617fe961bfd0059d924,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:17107719
77165175994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ebc6541380aebc06e575576f810c42,},Annotations:map[string]string{io.kubernetes.container.hash: 4b668526,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c992c70-b389-47bb-8b00-7a2ca292b909 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.299149792Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89a4503f-867a-4823-8490-1f4f0569df5b name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.299248503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89a4503f-867a-4823-8490-1f4f0569df5b name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.300457047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af150f8c-de81-412e-aef7-3596fd12f49a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.301019450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772918300991154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af150f8c-de81-412e-aef7-3596fd12f49a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.301823928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d505cb42-4c8c-4fac-8b91-2f04f5bd0847 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.301924349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d505cb42-4c8c-4fac-8b91-2f04f5bd0847 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.302095235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6530368ed5c19d50d14c500565b1329228ea8efa8dc4e08f1e8da327ce5d5be,PodSandboxId:09b0dc6471521ff046ef51a3e04c66ed727903e3dbf3dc52e1811f81f7cbcbdd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771998615366467,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8954270-a7e4-4584-860f-eea1ffd428c5,},Annotations:map[string]string{io.kubernetes.container.hash: d32f37fb,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3afcb1dd7909b321281cf5a01f61655ce3d83a2a2fc62469c60e0a9f2deb99d,PodSandboxId:dfcece09f20d56b974bc98e6c78cb281c12ccd38355e787fba31b45113df864b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998504590794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zqnfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2603cb56-7d34-4a9e-8614-9d4f4610da6d,},Annotations:map[string]string{io.kubernetes.container.hash: 95e18d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8173e8ddba28758d34b7e79cc6df0c0b8cdb9a98897184d7e4604310a691d,PodSandboxId:507494b76833c6d6657d7de62f81baee9c66ed98380b95de6d69de88e25d5ead,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998436951863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c8q9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 207d4899-9bf3-4f4b-ab21-bc35079a0bda,},Annotations:map[string]string{io.kubernetes.container.hash: 75fa6efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946876f232cf62e7167ded808256cb1b56bf060b281f4cadc2b1e458b1d104d4,PodSandboxId:8125ae53624d55518997da9202c44f7026e43a3680a50d5a5a47f2424b9d532c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,
CreatedAt:1710771996669309971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bzwvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52bafde-a25e-4496-a987-42d88c036982,},Annotations:map[string]string{io.kubernetes.container.hash: ed8fd302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b995e68f898a1a1ea4cb4e8bf33f0df409736f666941e05b3b1f1b0f78a2f4,PodSandboxId:772252e46b79d1197ab4e7b68a7c74350054576f14539b0b26229f8f9669f248,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771977348959285
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25ac09be5cba30be6df0e359c114df82,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9686e1e42595f2d463c25afb530ae72b29c52df7fba35353127bd2642c1de58,PodSandboxId:44bb254690a2d06096d00d7355fe5cb1509660af9e9144c81ab0b7f1cc63afee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771977343426375,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f5101ea08afdfc89ee317da149610b,},Annotations:map[string]string{io.kubernetes.container.hash: b451bdfa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a01556a0b322172234d832dbfbae2f30d1710f7ef44151f4d20f3f63028905,PodSandboxId:038299a9cef4ff6466cabda6a2a3fc9972e33f012ee4e1aea05c1c1664f2894b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107719772700
79264,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4b77a9b7b4ea24d3851dcc48a94a25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f41ad4a31ca75bdbf97b201eff283130f82a87b67a1e81e8a3cae7ce149709,PodSandboxId:a1a210c5eb4252cda2e953788792f52fb7380e853a0ff617fe961bfd0059d924,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:17107719
77165175994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ebc6541380aebc06e575576f810c42,},Annotations:map[string]string{io.kubernetes.container.hash: 4b668526,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d505cb42-4c8c-4fac-8b91-2f04f5bd0847 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.331593612Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=737451c5-86b6-422f-9a9c-d7b9becfa096 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.331964560Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:09b0dc6471521ff046ef51a3e04c66ed727903e3dbf3dc52e1811f81f7cbcbdd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a8954270-a7e4-4584-860f-eea1ffd428c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710771998365310678,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8954270-a7e4-4584-860f-eea1ffd428c5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespac
e\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T14:26:38.040140360Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cb07b345520e589a7af219fd9b0e555bfd2b268e54093ec65121977e921a0d5b,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-7c444,Uid:a04f0648-aa96-4119-b6e8-b981ac4e054f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710771998185333741,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-7c444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04f0648-aa96-4119-b6e8-b
981ac4e054f,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T14:26:37.865280223Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:507494b76833c6d6657d7de62f81baee9c66ed98380b95de6d69de88e25d5ead,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-c8q9g,Uid:207d4899-9bf3-4f4b-ab21-bc35079a0bda,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710771997960981985,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-c8q9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 207d4899-9bf3-4f4b-ab21-bc35079a0bda,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T14:26:36.138224493Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dfcece09f20d56b974bc98e6c78cb281c12ccd38355e787fba31b45113df864b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-zqnfs,Uid:2603cb56
-7d34-4a9e-8614-9d4f4610da6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710771997888009347,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-zqnfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2603cb56-7d34-4a9e-8614-9d4f4610da6d,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T14:26:36.068993228Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8125ae53624d55518997da9202c44f7026e43a3680a50d5a5a47f2424b9d532c,Metadata:&PodSandboxMetadata{Name:kube-proxy-bzwvf,Uid:f52bafde-a25e-4496-a987-42d88c036982,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710771996321207086,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bzwvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52bafde-a25e-4496-a987-42d88c036982,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T14:26:36.002399165Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:038299a9cef4ff6466cabda6a2a3fc9972e33f012ee4e1aea05c1c1664f2894b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-075922,Uid:1d4b77a9b7b4ea24d3851dcc48a94a25,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710771977068736576,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4b77a9b7b4ea24d3851dcc48a94a25,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1d4b77a9b7b4ea24d3851dcc48a94a25,kubernetes.io/config.seen: 2024-03-18T14:26:16.561203905Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44bb254690a2d06096d00d7355fe5cb1509660af9e9144c81ab0b7f1
cc63afee,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-075922,Uid:a6f5101ea08afdfc89ee317da149610b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710771977040896800,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f5101ea08afdfc89ee317da149610b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.39:8444,kubernetes.io/config.hash: a6f5101ea08afdfc89ee317da149610b,kubernetes.io/config.seen: 2024-03-18T14:26:16.561202370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:772252e46b79d1197ab4e7b68a7c74350054576f14539b0b26229f8f9669f248,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-075922,Uid:25ac09be5cba30be6df0e359c114df82,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt
:1710771977030638386,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25ac09be5cba30be6df0e359c114df82,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 25ac09be5cba30be6df0e359c114df82,kubernetes.io/config.seen: 2024-03-18T14:26:16.561204995Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a1a210c5eb4252cda2e953788792f52fb7380e853a0ff617fe961bfd0059d924,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-075922,Uid:06ebc6541380aebc06e575576f810c42,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710771976998400707,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ebc6541380aebc06e575576f810c42,tier: control-plane,},Annotati
ons:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.39:2379,kubernetes.io/config.hash: 06ebc6541380aebc06e575576f810c42,kubernetes.io/config.seen: 2024-03-18T14:26:16.561196634Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=737451c5-86b6-422f-9a9c-d7b9becfa096 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.333654043Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eeb213ed-b5b9-4a3c-9ef5-0c5a2acb7616 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.333776799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eeb213ed-b5b9-4a3c-9ef5-0c5a2acb7616 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.333963423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6530368ed5c19d50d14c500565b1329228ea8efa8dc4e08f1e8da327ce5d5be,PodSandboxId:09b0dc6471521ff046ef51a3e04c66ed727903e3dbf3dc52e1811f81f7cbcbdd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771998615366467,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8954270-a7e4-4584-860f-eea1ffd428c5,},Annotations:map[string]string{io.kubernetes.container.hash: d32f37fb,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3afcb1dd7909b321281cf5a01f61655ce3d83a2a2fc62469c60e0a9f2deb99d,PodSandboxId:dfcece09f20d56b974bc98e6c78cb281c12ccd38355e787fba31b45113df864b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998504590794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zqnfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2603cb56-7d34-4a9e-8614-9d4f4610da6d,},Annotations:map[string]string{io.kubernetes.container.hash: 95e18d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8173e8ddba28758d34b7e79cc6df0c0b8cdb9a98897184d7e4604310a691d,PodSandboxId:507494b76833c6d6657d7de62f81baee9c66ed98380b95de6d69de88e25d5ead,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998436951863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c8q9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 207d4899-9bf3-4f4b-ab21-bc35079a0bda,},Annotations:map[string]string{io.kubernetes.container.hash: 75fa6efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946876f232cf62e7167ded808256cb1b56bf060b281f4cadc2b1e458b1d104d4,PodSandboxId:8125ae53624d55518997da9202c44f7026e43a3680a50d5a5a47f2424b9d532c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,
CreatedAt:1710771996669309971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bzwvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52bafde-a25e-4496-a987-42d88c036982,},Annotations:map[string]string{io.kubernetes.container.hash: ed8fd302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b995e68f898a1a1ea4cb4e8bf33f0df409736f666941e05b3b1f1b0f78a2f4,PodSandboxId:772252e46b79d1197ab4e7b68a7c74350054576f14539b0b26229f8f9669f248,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771977348959285
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25ac09be5cba30be6df0e359c114df82,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9686e1e42595f2d463c25afb530ae72b29c52df7fba35353127bd2642c1de58,PodSandboxId:44bb254690a2d06096d00d7355fe5cb1509660af9e9144c81ab0b7f1cc63afee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771977343426375,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f5101ea08afdfc89ee317da149610b,},Annotations:map[string]string{io.kubernetes.container.hash: b451bdfa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a01556a0b322172234d832dbfbae2f30d1710f7ef44151f4d20f3f63028905,PodSandboxId:038299a9cef4ff6466cabda6a2a3fc9972e33f012ee4e1aea05c1c1664f2894b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107719772700
79264,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4b77a9b7b4ea24d3851dcc48a94a25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f41ad4a31ca75bdbf97b201eff283130f82a87b67a1e81e8a3cae7ce149709,PodSandboxId:a1a210c5eb4252cda2e953788792f52fb7380e853a0ff617fe961bfd0059d924,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:17107719
77165175994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ebc6541380aebc06e575576f810c42,},Annotations:map[string]string{io.kubernetes.container.hash: 4b668526,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eeb213ed-b5b9-4a3c-9ef5-0c5a2acb7616 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.358779228Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a01de40-5f9b-49e5-bb48-d1823723786f name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.358883514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a01de40-5f9b-49e5-bb48-d1823723786f name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.361573034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b74b16f-5565-479d-acc0-fe32f3df0832 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.362144245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772918362115786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b74b16f-5565-479d-acc0-fe32f3df0832 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.362942612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fa214ab-e9e5-4481-9fa6-041b65ae489f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.363023519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fa214ab-e9e5-4481-9fa6-041b65ae489f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:58 default-k8s-diff-port-075922 crio[692]: time="2024-03-18 14:41:58.363308146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6530368ed5c19d50d14c500565b1329228ea8efa8dc4e08f1e8da327ce5d5be,PodSandboxId:09b0dc6471521ff046ef51a3e04c66ed727903e3dbf3dc52e1811f81f7cbcbdd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710771998615366467,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8954270-a7e4-4584-860f-eea1ffd428c5,},Annotations:map[string]string{io.kubernetes.container.hash: d32f37fb,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3afcb1dd7909b321281cf5a01f61655ce3d83a2a2fc62469c60e0a9f2deb99d,PodSandboxId:dfcece09f20d56b974bc98e6c78cb281c12ccd38355e787fba31b45113df864b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998504590794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zqnfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2603cb56-7d34-4a9e-8614-9d4f4610da6d,},Annotations:map[string]string{io.kubernetes.container.hash: 95e18d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8173e8ddba28758d34b7e79cc6df0c0b8cdb9a98897184d7e4604310a691d,PodSandboxId:507494b76833c6d6657d7de62f81baee9c66ed98380b95de6d69de88e25d5ead,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710771998436951863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c8q9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 207d4899-9bf3-4f4b-ab21-bc35079a0bda,},Annotations:map[string]string{io.kubernetes.container.hash: 75fa6efe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946876f232cf62e7167ded808256cb1b56bf060b281f4cadc2b1e458b1d104d4,PodSandboxId:8125ae53624d55518997da9202c44f7026e43a3680a50d5a5a47f2424b9d532c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,
CreatedAt:1710771996669309971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bzwvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52bafde-a25e-4496-a987-42d88c036982,},Annotations:map[string]string{io.kubernetes.container.hash: ed8fd302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b995e68f898a1a1ea4cb4e8bf33f0df409736f666941e05b3b1f1b0f78a2f4,PodSandboxId:772252e46b79d1197ab4e7b68a7c74350054576f14539b0b26229f8f9669f248,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710771977348959285
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25ac09be5cba30be6df0e359c114df82,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9686e1e42595f2d463c25afb530ae72b29c52df7fba35353127bd2642c1de58,PodSandboxId:44bb254690a2d06096d00d7355fe5cb1509660af9e9144c81ab0b7f1cc63afee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710771977343426375,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f5101ea08afdfc89ee317da149610b,},Annotations:map[string]string{io.kubernetes.container.hash: b451bdfa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a01556a0b322172234d832dbfbae2f30d1710f7ef44151f4d20f3f63028905,PodSandboxId:038299a9cef4ff6466cabda6a2a3fc9972e33f012ee4e1aea05c1c1664f2894b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107719772700
79264,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4b77a9b7b4ea24d3851dcc48a94a25,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f41ad4a31ca75bdbf97b201eff283130f82a87b67a1e81e8a3cae7ce149709,PodSandboxId:a1a210c5eb4252cda2e953788792f52fb7380e853a0ff617fe961bfd0059d924,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:17107719
77165175994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-075922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ebc6541380aebc06e575576f810c42,},Annotations:map[string]string{io.kubernetes.container.hash: 4b668526,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fa214ab-e9e5-4481-9fa6-041b65ae489f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c6530368ed5c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   09b0dc6471521       storage-provisioner
	f3afcb1dd7909       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   dfcece09f20d5       coredns-5dd5756b68-zqnfs
	8bd8173e8ddba       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   507494b76833c       coredns-5dd5756b68-c8q9g
	946876f232cf6       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   8125ae53624d5       kube-proxy-bzwvf
	15b995e68f898       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   772252e46b79d       kube-scheduler-default-k8s-diff-port-075922
	c9686e1e42595       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   44bb254690a2d       kube-apiserver-default-k8s-diff-port-075922
	c7a01556a0b32       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   038299a9cef4f       kube-controller-manager-default-k8s-diff-port-075922
	f1f41ad4a31ca       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   a1a210c5eb425       etcd-default-k8s-diff-port-075922
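
The crio ListContainers debug responses above and the container-status table just shown report the same set of running containers via the CRI RuntimeService. Below is a minimal standalone Go sketch of issuing that query directly against the crio socket; it assumes the k8s.io/cri-api v1 bindings and the unix:///var/run/crio/crio.sock path (the kubeadm cri-socket annotation in the node description further down), and is illustrative only, not part of the minikube test suite.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the crio socket directly (assumed path; see the node's cri-socket annotation).
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimev1.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter mirrors the "No filters were applied" debug lines above.
        resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            // Print a truncated ID, container name, and state, similar to the table above.
            fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.GetName(), c.State)
        }
    }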
	
	
	==> coredns [8bd8173e8ddba28758d34b7e79cc6df0c0b8cdb9a98897184d7e4604310a691d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [f3afcb1dd7909b321281cf5a01f61655ce3d83a2a2fc62469c60e0a9f2deb99d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-075922
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-075922
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=default-k8s-diff-port-075922
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T14_26_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 14:26:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-075922
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:41:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:36:56 +0000   Mon, 18 Mar 2024 14:26:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:36:56 +0000   Mon, 18 Mar 2024 14:26:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:36:56 +0000   Mon, 18 Mar 2024 14:26:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:36:56 +0000   Mon, 18 Mar 2024 14:26:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.39
	  Hostname:    default-k8s-diff-port-075922
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f21705f1d164bb184935d88a5f9583f
	  System UUID:                7f21705f-1d16-4bb1-8493-5d88a5f9583f
	  Boot ID:                    5e71147a-1e7d-42e2-b1a4-98acc5584c15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-c8q9g                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5dd5756b68-zqnfs                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-075922                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-075922             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-075922    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-bzwvf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-075922             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-7c444                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node default-k8s-diff-port-075922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node default-k8s-diff-port-075922 event: Registered Node default-k8s-diff-port-075922 in Controller
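
The percentages in the Allocated resources table above are consistent with truncated integer division of request by allocatable: 950m of the node's 2-CPU allocatable is 47%, 440Mi of 2164188Ki memory is 20%, and 340Mi is 16%. A small standalone Go sketch of that arithmetic using apimachinery's resource.Quantity (illustrative only, not part of the report's tooling):

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    // pct computes a truncated integer percentage of request over allocatable,
    // matching the figures shown in the table above.
    func pct(req, alloc resource.Quantity) int64 {
        return req.MilliValue() * 100 / alloc.MilliValue()
    }

    func main() {
        cpuAlloc := resource.MustParse("2")         // Allocatable cpu from the node description
        memAlloc := resource.MustParse("2164188Ki") // Allocatable memory from the node description

        fmt.Println(pct(resource.MustParse("950m"), cpuAlloc))  // 47
        fmt.Println(pct(resource.MustParse("440Mi"), memAlloc)) // 20
        fmt.Println(pct(resource.MustParse("340Mi"), memAlloc)) // 16
    }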
	
	
	==> dmesg <==
	[  +0.056526] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043858] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar18 14:21] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.834930] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.653999] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.402718] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.065333] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.086750] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.229015] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.164948] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.327886] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +5.530155] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[  +0.068051] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.084030] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +5.702241] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.492212] kauditd_printk_skb: 69 callbacks suppressed
	[Mar18 14:26] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.514385] systemd-fstab-generator[3385]: Ignoring "noauto" option for root device
	[  +4.818503] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.483800] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[ +12.880138] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[  +0.117558] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 14:27] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [f1f41ad4a31ca75bdbf97b201eff283130f82a87b67a1e81e8a3cae7ce149709] <==
	{"level":"info","ts":"2024-03-18T14:26:17.861431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dcb628089222db2 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T14:26:17.861501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dcb628089222db2 elected leader dcb628089222db2 at term 2"}
	{"level":"info","ts":"2024-03-18T14:26:17.866073Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dcb628089222db2","local-member-attributes":"{Name:default-k8s-diff-port-075922 ClientURLs:[https://192.168.83.39:2379]}","request-path":"/0/members/dcb628089222db2/attributes","cluster-id":"ee08272957b13977","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T14:26:17.866394Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:26:17.867776Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T14:26:17.869953Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:26:17.870836Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:26:17.870863Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T14:26:17.888756Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T14:26:17.896406Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.39:2379"}
	{"level":"info","ts":"2024-03-18T14:26:17.897196Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ee08272957b13977","local-member-id":"dcb628089222db2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:26:17.900915Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:26:17.900972Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:36:18.31991Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":710}
	{"level":"info","ts":"2024-03-18T14:36:18.32543Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":710,"took":"4.447828ms","hash":1228059229}
	{"level":"info","ts":"2024-03-18T14:36:18.325533Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1228059229,"revision":710,"compact-revision":-1}
	{"level":"info","ts":"2024-03-18T14:41:18.327066Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":954}
	{"level":"info","ts":"2024-03-18T14:41:18.32942Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":954,"took":"1.427457ms","hash":1076798787}
	{"level":"info","ts":"2024-03-18T14:41:18.329792Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1076798787,"revision":954,"compact-revision":710}
	{"level":"info","ts":"2024-03-18T14:41:20.733301Z","caller":"traceutil/trace.go:171","msg":"trace[958889215] transaction","detail":"{read_only:false; response_revision:1200; number_of_response:1; }","duration":"282.996284ms","start":"2024-03-18T14:41:20.450188Z","end":"2024-03-18T14:41:20.733185Z","steps":["trace[958889215] 'process raft request'  (duration: 282.850981ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T14:41:22.21089Z","caller":"traceutil/trace.go:171","msg":"trace[1326661136] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"193.586124ms","start":"2024-03-18T14:41:22.017272Z","end":"2024-03-18T14:41:22.210859Z","steps":["trace[1326661136] 'process raft request'  (duration: 120.993197ms)","trace[1326661136] 'compare'  (duration: 70.669465ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T14:41:22.210886Z","caller":"traceutil/trace.go:171","msg":"trace[1720912853] linearizableReadLoop","detail":"{readStateIndex:1397; appliedIndex:1395; }","duration":"151.712627ms","start":"2024-03-18T14:41:22.059026Z","end":"2024-03-18T14:41:22.210738Z","steps":["trace[1720912853] 'read index received'  (duration: 79.249198ms)","trace[1720912853] 'applied index is now lower than readState.Index'  (duration: 72.460916ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T14:41:22.2124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.236574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T14:41:22.212587Z","caller":"traceutil/trace.go:171","msg":"trace[2065417591] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1202; }","duration":"153.559087ms","start":"2024-03-18T14:41:22.059Z","end":"2024-03-18T14:41:22.212559Z","steps":["trace[2065417591] 'agreement among raft nodes before linearized reading'  (duration: 151.903158ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T14:41:22.213651Z","caller":"traceutil/trace.go:171","msg":"trace[1517935858] transaction","detail":"{read_only:false; response_revision:1202; number_of_response:1; }","duration":"192.447755ms","start":"2024-03-18T14:41:22.021183Z","end":"2024-03-18T14:41:22.213631Z","steps":["trace[1517935858] 'process raft request'  (duration: 188.871024ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:41:58 up 21 min,  0 users,  load average: 0.26, 0.24, 0.26
	Linux default-k8s-diff-port-075922 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c9686e1e42595f2d463c25afb530ae72b29c52df7fba35353127bd2642c1de58] <==
	W0318 14:37:21.184351       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:37:21.184454       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:37:21.184490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:38:20.104470       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 14:39:20.105134       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:39:21.183420       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:39:21.183831       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:39:21.183911       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:39:21.185565       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:39:21.185650       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:39:21.185782       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:40:20.104647       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 14:41:20.104508       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:41:20.187574       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:41:20.187789       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:41:20.188192       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:41:21.188778       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:41:21.188855       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:41:21.188906       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:41:21.189088       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:41:21.189286       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:41:21.190529       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c7a01556a0b322172234d832dbfbae2f30d1710f7ef44151f4d20f3f63028905] <==
	I0318 14:36:05.767892       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:36:35.189841       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:36:35.776363       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:37:05.196582       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:37:05.785772       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:37:33.509132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="235.814µs"
	E0318 14:37:35.202992       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:37:35.794311       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:37:46.503482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="166.036µs"
	E0318 14:38:05.210020       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:38:05.804936       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:38:35.217968       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:38:35.812919       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:39:05.223928       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:39:05.824163       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:39:35.229878       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:39:35.834156       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:40:05.235815       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:40:05.844376       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:40:35.242226       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:40:35.855213       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:41:05.249955       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:41:05.865828       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:41:35.267006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:41:35.874628       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [946876f232cf62e7167ded808256cb1b56bf060b281f4cadc2b1e458b1d104d4] <==
	I0318 14:26:36.849843       1 server_others.go:69] "Using iptables proxy"
	I0318 14:26:36.882879       1 node.go:141] Successfully retrieved node IP: 192.168.83.39
	I0318 14:26:37.009449       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 14:26:37.009512       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 14:26:37.023032       1 server_others.go:152] "Using iptables Proxier"
	I0318 14:26:37.023108       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 14:26:37.023289       1 server.go:846] "Version info" version="v1.28.4"
	I0318 14:26:37.023322       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 14:26:37.025099       1 config.go:188] "Starting service config controller"
	I0318 14:26:37.025138       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 14:26:37.025181       1 config.go:97] "Starting endpoint slice config controller"
	I0318 14:26:37.025185       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 14:26:37.025621       1 config.go:315] "Starting node config controller"
	I0318 14:26:37.025656       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 14:26:37.125852       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 14:26:37.125918       1 shared_informer.go:318] Caches are synced for service config
	I0318 14:26:37.143315       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [15b995e68f898a1a1ea4cb4e8bf33f0df409736f666941e05b3b1f1b0f78a2f4] <==
	W0318 14:26:20.199880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 14:26:20.200908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 14:26:20.199959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:20.200973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:21.064106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 14:26:21.064245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 14:26:21.073175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 14:26:21.073365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 14:26:21.082271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:21.082370       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:21.176763       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 14:26:21.176870       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 14:26:21.281957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:21.282181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 14:26:21.325777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 14:26:21.325899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 14:26:21.331486       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 14:26:21.331620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 14:26:21.462627       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 14:26:21.462773       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 14:26:21.484191       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 14:26:21.484242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 14:26:21.505249       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 14:26:21.505356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 14:26:23.785360       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:39:23 default-k8s-diff-port-075922 kubelet[3712]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:39:23 default-k8s-diff-port-075922 kubelet[3712]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:39:23 default-k8s-diff-port-075922 kubelet[3712]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:39:26 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:39:26.488821    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:39:38 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:39:38.489157    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:39:53 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:39:53.489808    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:40:04 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:40:04.489067    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:40:18 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:40:18.489441    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:40:23 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:40:23.535039    3712 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:40:23 default-k8s-diff-port-075922 kubelet[3712]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:40:23 default-k8s-diff-port-075922 kubelet[3712]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:40:23 default-k8s-diff-port-075922 kubelet[3712]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:40:23 default-k8s-diff-port-075922 kubelet[3712]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:40:29 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:40:29.490001    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:40:41 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:40:41.490340    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:40:56 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:40:56.490046    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:41:08 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:41:08.488637    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:41:23 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:41:23.490267    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:41:23 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:41:23.535546    3712 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:41:23 default-k8s-diff-port-075922 kubelet[3712]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:41:23 default-k8s-diff-port-075922 kubelet[3712]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:41:23 default-k8s-diff-port-075922 kubelet[3712]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:41:23 default-k8s-diff-port-075922 kubelet[3712]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:41:38 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:41:38.489933    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	Mar 18 14:41:51 default-k8s-diff-port-075922 kubelet[3712]: E0318 14:41:51.491010    3712 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-7c444" podUID="a04f0648-aa96-4119-b6e8-b981ac4e054f"
	
	
	==> storage-provisioner [c6530368ed5c19d50d14c500565b1329228ea8efa8dc4e08f1e8da327ce5d5be] <==
	I0318 14:26:38.862392       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 14:26:38.877508       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 14:26:38.878744       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 14:26:38.894521       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 14:26:38.894931       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-075922_c6196403-3a40-4c95-9e9d-4699fb79f97c!
	I0318 14:26:38.895626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd295b4d-305d-4153-b3e2-0b829e6989d6", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-075922_c6196403-3a40-4c95-9e9d-4699fb79f97c became leader
	I0318 14:26:38.996837       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-075922_c6196403-3a40-4c95-9e9d-4699fb79f97c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-075922 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-7c444
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-075922 describe pod metrics-server-57f55c9bc5-7c444
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-075922 describe pod metrics-server-57f55c9bc5-7c444: exit status 1 (75.468557ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-7c444" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-075922 describe pod metrics-server-57f55c9bc5-7c444: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (373.69s)
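For manual triage, a minimal follow-up check (a sketch only, assuming the default-k8s-diff-port-075922 profile is still running and that this variant waits on the same kubernetes-dashboard namespace as the sibling no-preload test below) is to list the addon workloads directly and repeat the non-running-pod query that helpers_test.go performs:

	kubectl --context default-k8s-diff-port-075922 -n kubernetes-dashboard get deploy,pods -o wide
	kubectl --context default-k8s-diff-port-075922 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

In the logs above the only non-running pod is metrics-server-57f55c9bc5-7c444, and it is already gone by the time the post-mortem describe runs, which is why kubectl reports NotFound.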

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (263.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-188109 -n no-preload-188109
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:41:02.875075143 +0000 UTC m=+6980.668534346
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-188109 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-188109 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.528µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-188109 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
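The image assertion from start_stop_delete_test.go:297 can be re-run by hand; a minimal sketch (assuming the no-preload-188109 profile is still reachable and that the addon's scraper deployment keeps the dashboard-metrics-scraper name referenced in the describe command above) is:

	kubectl --context no-preload-188109 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

An output that does not contain registry.k8s.io/echoserver:1.4, or a missing deployment (consistent with the empty "Addon deployment info" above), would mean the dashboard addon never came up with the overridden test image.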
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188109 -n no-preload-188109
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-188109 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-188109 logs -n 25: (1.36517111s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-059272 sudo crio                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo crio                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-059272                                       | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| delete  | -p flannel-059272                                      | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-784874 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | disable-driver-mounts-784874                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:14 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-188109             | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767719            | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-075922  | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC | 18 Mar 24 14:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC |                     |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-782728        | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-188109                  | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC | 18 Mar 24 14:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767719                 | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-075922       | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-782728             | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:40 UTC | 18 Mar 24 14:40 UTC |
	| start   | -p newest-cni-997491 --memory=2200 --alsologtostderr   | newest-cni-997491            | jenkins | v1.32.0 | 18 Mar 24 14:40 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:40:47
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:40:47.860233 1134500 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:40:47.860530 1134500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:40:47.860541 1134500 out.go:304] Setting ErrFile to fd 2...
	I0318 14:40:47.860548 1134500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:40:47.860766 1134500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:40:47.861449 1134500 out.go:298] Setting JSON to false
	I0318 14:40:47.862819 1134500 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":22995,"bootTime":1710749853,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:40:47.862890 1134500 start.go:139] virtualization: kvm guest
	I0318 14:40:47.865457 1134500 out.go:177] * [newest-cni-997491] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:40:47.866908 1134500 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:40:47.866962 1134500 notify.go:220] Checking for updates...
	I0318 14:40:47.868625 1134500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:40:47.870263 1134500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:40:47.871798 1134500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:40:47.873186 1134500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:40:47.874811 1134500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:40:47.876948 1134500 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:40:47.877090 1134500 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:40:47.877270 1134500 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:40:47.877465 1134500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:40:47.917506 1134500 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 14:40:47.919187 1134500 start.go:297] selected driver: kvm2
	I0318 14:40:47.919220 1134500 start.go:901] validating driver "kvm2" against <nil>
	I0318 14:40:47.919235 1134500 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:40:47.920155 1134500 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:40:47.920261 1134500 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:40:47.936759 1134500 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:40:47.936817 1134500 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0318 14:40:47.936867 1134500 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0318 14:40:47.937113 1134500 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 14:40:47.937180 1134500 cni.go:84] Creating CNI manager for ""
	I0318 14:40:47.937194 1134500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:40:47.937201 1134500 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 14:40:47.937251 1134500 start.go:340] cluster config:
	{Name:newest-cni-997491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-997491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:40:47.937363 1134500 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:40:47.939692 1134500 out.go:177] * Starting "newest-cni-997491" primary control-plane node in "newest-cni-997491" cluster
	I0318 14:40:47.941013 1134500 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:40:47.941058 1134500 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 14:40:47.941066 1134500 cache.go:56] Caching tarball of preloaded images
	I0318 14:40:47.941187 1134500 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:40:47.941202 1134500 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0318 14:40:47.941306 1134500 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/config.json ...
	I0318 14:40:47.941333 1134500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/newest-cni-997491/config.json: {Name:mk53a5078ce8cd00824bc119cfc6d3c1fd475011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:40:47.941574 1134500 start.go:360] acquireMachinesLock for newest-cni-997491: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:40:47.941620 1134500 start.go:364] duration metric: took 21.846µs to acquireMachinesLock for "newest-cni-997491"
	I0318 14:40:47.941648 1134500 start.go:93] Provisioning new machine with config: &{Name:newest-cni-997491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-997491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:40:47.941744 1134500 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 14:40:47.943644 1134500 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 14:40:47.943860 1134500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:40:47.943909 1134500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:40:47.961532 1134500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43715
	I0318 14:40:47.962023 1134500 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:40:47.962679 1134500 main.go:141] libmachine: Using API Version  1
	I0318 14:40:47.962704 1134500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:40:47.963193 1134500 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:40:47.963417 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetMachineName
	I0318 14:40:47.963619 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .DriverName
	I0318 14:40:47.963800 1134500 start.go:159] libmachine.API.Create for "newest-cni-997491" (driver="kvm2")
	I0318 14:40:47.963868 1134500 client.go:168] LocalClient.Create starting
	I0318 14:40:47.963955 1134500 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem
	I0318 14:40:47.964017 1134500 main.go:141] libmachine: Decoding PEM data...
	I0318 14:40:47.964048 1134500 main.go:141] libmachine: Parsing certificate...
	I0318 14:40:47.964141 1134500 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem
	I0318 14:40:47.964170 1134500 main.go:141] libmachine: Decoding PEM data...
	I0318 14:40:47.964187 1134500 main.go:141] libmachine: Parsing certificate...
	I0318 14:40:47.964215 1134500 main.go:141] libmachine: Running pre-create checks...
	I0318 14:40:47.964235 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .PreCreateCheck
	I0318 14:40:47.964689 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .GetConfigRaw
	I0318 14:40:47.965194 1134500 main.go:141] libmachine: Creating machine...
	I0318 14:40:47.965211 1134500 main.go:141] libmachine: (newest-cni-997491) Calling .Create
	I0318 14:40:47.965374 1134500 main.go:141] libmachine: (newest-cni-997491) Creating KVM machine...
	I0318 14:40:47.966812 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | found existing default KVM network
	I0318 14:40:47.968337 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:47.968143 1134524 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:56:14:d4} reservation:<nil>}
	I0318 14:40:47.969865 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:47.969769 1134524 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002de7f0}
	I0318 14:40:47.969892 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | created network xml: 
	I0318 14:40:47.969902 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | <network>
	I0318 14:40:47.969911 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   <name>mk-newest-cni-997491</name>
	I0318 14:40:47.969920 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   <dns enable='no'/>
	I0318 14:40:47.969926 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   
	I0318 14:40:47.969937 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0318 14:40:47.969948 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |     <dhcp>
	I0318 14:40:47.969960 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0318 14:40:47.969976 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |     </dhcp>
	I0318 14:40:47.969989 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   </ip>
	I0318 14:40:47.969998 1134500 main.go:141] libmachine: (newest-cni-997491) DBG |   
	I0318 14:40:47.970006 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | </network>
	I0318 14:40:47.970026 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | 
	I0318 14:40:47.975703 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | trying to create private KVM network mk-newest-cni-997491 192.168.50.0/24...
	I0318 14:40:48.053455 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | private KVM network mk-newest-cni-997491 192.168.50.0/24 created
	I0318 14:40:48.053515 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:48.053400 1134524 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:40:48.053529 1134500 main.go:141] libmachine: (newest-cni-997491) Setting up store path in /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491 ...
	I0318 14:40:48.053553 1134500 main.go:141] libmachine: (newest-cni-997491) Building disk image from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 14:40:48.053567 1134500 main.go:141] libmachine: (newest-cni-997491) Downloading /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 14:40:48.325318 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:48.325178 1134524 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/id_rsa...
	I0318 14:40:48.519429 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:48.519283 1134524 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/newest-cni-997491.rawdisk...
	I0318 14:40:48.519466 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Writing magic tar header
	I0318 14:40:48.519507 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Writing SSH key tar header
	I0318 14:40:48.519520 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:48.519456 1134524 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491 ...
	I0318 14:40:48.519650 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491
	I0318 14:40:48.519700 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491 (perms=drwx------)
	I0318 14:40:48.519717 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines
	I0318 14:40:48.519733 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:40:48.519743 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18427-1067917
	I0318 14:40:48.519751 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 14:40:48.519759 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home/jenkins
	I0318 14:40:48.519766 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Checking permissions on dir: /home
	I0318 14:40:48.519776 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | Skipping /home - not owner
	I0318 14:40:48.519811 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube/machines (perms=drwxr-xr-x)
	I0318 14:40:48.519883 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917/.minikube (perms=drwxr-xr-x)
	I0318 14:40:48.519900 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins/minikube-integration/18427-1067917 (perms=drwxrwxr-x)
	I0318 14:40:48.519914 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 14:40:48.519932 1134500 main.go:141] libmachine: (newest-cni-997491) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 14:40:48.519943 1134500 main.go:141] libmachine: (newest-cni-997491) Creating domain...
	I0318 14:40:48.521247 1134500 main.go:141] libmachine: (newest-cni-997491) define libvirt domain using xml: 
	I0318 14:40:48.521270 1134500 main.go:141] libmachine: (newest-cni-997491) <domain type='kvm'>
	I0318 14:40:48.521277 1134500 main.go:141] libmachine: (newest-cni-997491)   <name>newest-cni-997491</name>
	I0318 14:40:48.521286 1134500 main.go:141] libmachine: (newest-cni-997491)   <memory unit='MiB'>2200</memory>
	I0318 14:40:48.521295 1134500 main.go:141] libmachine: (newest-cni-997491)   <vcpu>2</vcpu>
	I0318 14:40:48.521302 1134500 main.go:141] libmachine: (newest-cni-997491)   <features>
	I0318 14:40:48.521313 1134500 main.go:141] libmachine: (newest-cni-997491)     <acpi/>
	I0318 14:40:48.521324 1134500 main.go:141] libmachine: (newest-cni-997491)     <apic/>
	I0318 14:40:48.521334 1134500 main.go:141] libmachine: (newest-cni-997491)     <pae/>
	I0318 14:40:48.521344 1134500 main.go:141] libmachine: (newest-cni-997491)     
	I0318 14:40:48.521353 1134500 main.go:141] libmachine: (newest-cni-997491)   </features>
	I0318 14:40:48.521366 1134500 main.go:141] libmachine: (newest-cni-997491)   <cpu mode='host-passthrough'>
	I0318 14:40:48.521401 1134500 main.go:141] libmachine: (newest-cni-997491)   
	I0318 14:40:48.521434 1134500 main.go:141] libmachine: (newest-cni-997491)   </cpu>
	I0318 14:40:48.521444 1134500 main.go:141] libmachine: (newest-cni-997491)   <os>
	I0318 14:40:48.521459 1134500 main.go:141] libmachine: (newest-cni-997491)     <type>hvm</type>
	I0318 14:40:48.521470 1134500 main.go:141] libmachine: (newest-cni-997491)     <boot dev='cdrom'/>
	I0318 14:40:48.521475 1134500 main.go:141] libmachine: (newest-cni-997491)     <boot dev='hd'/>
	I0318 14:40:48.521483 1134500 main.go:141] libmachine: (newest-cni-997491)     <bootmenu enable='no'/>
	I0318 14:40:48.521493 1134500 main.go:141] libmachine: (newest-cni-997491)   </os>
	I0318 14:40:48.521501 1134500 main.go:141] libmachine: (newest-cni-997491)   <devices>
	I0318 14:40:48.521511 1134500 main.go:141] libmachine: (newest-cni-997491)     <disk type='file' device='cdrom'>
	I0318 14:40:48.521527 1134500 main.go:141] libmachine: (newest-cni-997491)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/boot2docker.iso'/>
	I0318 14:40:48.521543 1134500 main.go:141] libmachine: (newest-cni-997491)       <target dev='hdc' bus='scsi'/>
	I0318 14:40:48.521554 1134500 main.go:141] libmachine: (newest-cni-997491)       <readonly/>
	I0318 14:40:48.521583 1134500 main.go:141] libmachine: (newest-cni-997491)     </disk>
	I0318 14:40:48.521597 1134500 main.go:141] libmachine: (newest-cni-997491)     <disk type='file' device='disk'>
	I0318 14:40:48.521609 1134500 main.go:141] libmachine: (newest-cni-997491)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 14:40:48.521643 1134500 main.go:141] libmachine: (newest-cni-997491)       <source file='/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/newest-cni-997491/newest-cni-997491.rawdisk'/>
	I0318 14:40:48.521667 1134500 main.go:141] libmachine: (newest-cni-997491)       <target dev='hda' bus='virtio'/>
	I0318 14:40:48.521687 1134500 main.go:141] libmachine: (newest-cni-997491)     </disk>
	I0318 14:40:48.521706 1134500 main.go:141] libmachine: (newest-cni-997491)     <interface type='network'>
	I0318 14:40:48.521719 1134500 main.go:141] libmachine: (newest-cni-997491)       <source network='mk-newest-cni-997491'/>
	I0318 14:40:48.521730 1134500 main.go:141] libmachine: (newest-cni-997491)       <model type='virtio'/>
	I0318 14:40:48.521740 1134500 main.go:141] libmachine: (newest-cni-997491)     </interface>
	I0318 14:40:48.521753 1134500 main.go:141] libmachine: (newest-cni-997491)     <interface type='network'>
	I0318 14:40:48.521769 1134500 main.go:141] libmachine: (newest-cni-997491)       <source network='default'/>
	I0318 14:40:48.521781 1134500 main.go:141] libmachine: (newest-cni-997491)       <model type='virtio'/>
	I0318 14:40:48.521796 1134500 main.go:141] libmachine: (newest-cni-997491)     </interface>
	I0318 14:40:48.521806 1134500 main.go:141] libmachine: (newest-cni-997491)     <serial type='pty'>
	I0318 14:40:48.521817 1134500 main.go:141] libmachine: (newest-cni-997491)       <target port='0'/>
	I0318 14:40:48.521827 1134500 main.go:141] libmachine: (newest-cni-997491)     </serial>
	I0318 14:40:48.521835 1134500 main.go:141] libmachine: (newest-cni-997491)     <console type='pty'>
	I0318 14:40:48.521844 1134500 main.go:141] libmachine: (newest-cni-997491)       <target type='serial' port='0'/>
	I0318 14:40:48.521863 1134500 main.go:141] libmachine: (newest-cni-997491)     </console>
	I0318 14:40:48.521874 1134500 main.go:141] libmachine: (newest-cni-997491)     <rng model='virtio'>
	I0318 14:40:48.521888 1134500 main.go:141] libmachine: (newest-cni-997491)       <backend model='random'>/dev/random</backend>
	I0318 14:40:48.521902 1134500 main.go:141] libmachine: (newest-cni-997491)     </rng>
	I0318 14:40:48.521913 1134500 main.go:141] libmachine: (newest-cni-997491)     
	I0318 14:40:48.521920 1134500 main.go:141] libmachine: (newest-cni-997491)     
	I0318 14:40:48.521933 1134500 main.go:141] libmachine: (newest-cni-997491)   </devices>
	I0318 14:40:48.521942 1134500 main.go:141] libmachine: (newest-cni-997491) </domain>
	I0318 14:40:48.521951 1134500 main.go:141] libmachine: (newest-cni-997491) 
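	
	The XML above is the complete libvirt domain definition that the kvm2 driver hands to libvirt before the "Creating domain..." step that follows. As a rough illustration only (not the driver's actual code), the same define-and-start flow can be sketched with the libvirt Go bindings, assuming the libvirt.org/go/libvirt package and the qemu:///system URI from the cluster config:
	
	package main
	
	import (
		"fmt"
		"log"
	
		libvirt "libvirt.org/go/libvirt"
	)
	
	func main() {
		// Connect to the system libvirt instance (KVMQemuURI in the config above).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()
	
		// domainXML stands in for the <domain type='kvm'>...</domain> document
		// printed in the log; it is only a placeholder here.
		domainXML := `<domain type='kvm'>...</domain>`
	
		// Define the persistent domain, then start it ("Creating domain...").
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			log.Fatalf("define domain: %v", err)
		}
		defer dom.Free()
	
		if err := dom.Create(); err != nil {
			log.Fatalf("start domain: %v", err)
		}
		name, _ := dom.GetName()
		fmt.Println("started domain", name)
	}
	
	The command-line equivalent is virsh define followed by virsh start on the same XML; the MAC addresses reported in the next lines are read back from the domain once it has been defined.
	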
	I0318 14:40:48.526550 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:eb:3f:47 in network default
	I0318 14:40:48.527182 1134500 main.go:141] libmachine: (newest-cni-997491) Ensuring networks are active...
	I0318 14:40:48.527206 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:48.528002 1134500 main.go:141] libmachine: (newest-cni-997491) Ensuring network default is active
	I0318 14:40:48.528396 1134500 main.go:141] libmachine: (newest-cni-997491) Ensuring network mk-newest-cni-997491 is active
	I0318 14:40:48.529049 1134500 main.go:141] libmachine: (newest-cni-997491) Getting domain xml...
	I0318 14:40:48.529912 1134500 main.go:141] libmachine: (newest-cni-997491) Creating domain...
	I0318 14:40:49.800134 1134500 main.go:141] libmachine: (newest-cni-997491) Waiting to get IP...
	I0318 14:40:49.800899 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:49.801330 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:49.801388 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:49.801321 1134524 retry.go:31] will retry after 215.972164ms: waiting for machine to come up
	I0318 14:40:50.019019 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:50.019615 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:50.019650 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:50.019556 1134524 retry.go:31] will retry after 302.703358ms: waiting for machine to come up
	I0318 14:40:50.324237 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:50.324788 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:50.324820 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:50.324717 1134524 retry.go:31] will retry after 424.444672ms: waiting for machine to come up
	I0318 14:40:50.750250 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:50.750833 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:50.750868 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:50.750787 1134524 retry.go:31] will retry after 550.56941ms: waiting for machine to come up
	I0318 14:40:51.302390 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:51.302856 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:51.302880 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:51.302799 1134524 retry.go:31] will retry after 472.696783ms: waiting for machine to come up
	I0318 14:40:51.777568 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:51.777993 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:51.778024 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:51.777945 1134524 retry.go:31] will retry after 949.389477ms: waiting for machine to come up
	I0318 14:40:52.728902 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:52.729381 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:52.729414 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:52.729314 1134524 retry.go:31] will retry after 1.029751384s: waiting for machine to come up
	I0318 14:40:53.760875 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:53.761368 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:53.761395 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:53.761322 1134524 retry.go:31] will retry after 1.197480841s: waiting for machine to come up
	I0318 14:40:54.960787 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:54.961279 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:54.961311 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:54.961231 1134524 retry.go:31] will retry after 1.575956051s: waiting for machine to come up
	I0318 14:40:56.538939 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:56.539394 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:56.539424 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:56.539333 1134524 retry.go:31] will retry after 1.553381087s: waiting for machine to come up
	I0318 14:40:58.095145 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:58.095630 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:58.095664 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:58.095569 1134524 retry.go:31] will retry after 1.779999121s: waiting for machine to come up
	I0318 14:40:59.877035 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:40:59.877575 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:40:59.877609 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:40:59.877519 1134524 retry.go:31] will retry after 2.375135175s: waiting for machine to come up
	I0318 14:41:02.254060 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | domain newest-cni-997491 has defined MAC address 52:54:00:f9:f7:0a in network mk-newest-cni-997491
	I0318 14:41:02.254541 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | unable to find current IP address of domain newest-cni-997491 in network mk-newest-cni-997491
	I0318 14:41:02.254578 1134500 main.go:141] libmachine: (newest-cni-997491) DBG | I0318 14:41:02.254475 1134524 retry.go:31] will retry after 3.82072828s: waiting for machine to come up
	
	
	==> CRI-O <==
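	
	The journal entries below are CRI-O's debug traces of CRI gRPC traffic on no-preload-188109: each Request/Response pair corresponds to a Version, ImageFsInfo, or ListContainers call made against the runtime while the node is being polled. For illustration only (assuming CRI-O's default endpoint unix:///var/run/crio/crio.sock and the k8s.io/cri-api v1 client; this is not the test harness's own code), the two RuntimeService calls seen in the trace can be issued like this:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Dial the CRI-O socket (default path assumed).
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial: %v", err)
		}
		defer conn.Close()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
	
		// Mirrors the Version request/response pairs in the trace.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("version: %v", err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion)
	
		// An empty filter is what produces "No filters were applied,
		// returning full container list" in the trace.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("list containers: %v", err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}
	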
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.590521084Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772863590499783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bdb47d2-013c-4779-84a6-8304cde65c30 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.591087029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=811cfb91-d4f0-4508-a416-47f0a52a0229 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.591166942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=811cfb91-d4f0-4508-a416-47f0a52a0229 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.591402435Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed6e4fe42941b75e00ee31ae83067d7cce9e35e61832045df43b19ce9e57215b,PodSandboxId:9c48a3a8499e2613f106caedd0a693fbd9d6ccae8a979fdb108fef4ab85b7bf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710772056081244408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b,},Annotations:map[string]string{io.kubernetes.container.hash: 75097ff0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1116147cc6e3c0720232a34abd47ae4c0e50beea7079e046863267eb5a15b59,PodSandboxId:bb476a5de03790a8866b910f6e38572a79ae0f678465a4ed2928799c87868f07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054893062850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-jk9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ff991e-2c6b-49ad-bc69-c427d1f24610,},Annotations:map[string]string{io.kubernetes.container.hash: 41c999b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a4759ba2307687c5ab971f85f7a339165bb85e2012da3fc57cd29aa1f0935d,PodSandboxId:9091510b0061b52e7e4c020f864099432ac7be324da92171eccf94f67235f48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054839418719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xczpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0
9adcb8-dacb-4b1c-bbbf-9f056e89da3b,},Annotations:map[string]string{io.kubernetes.container.hash: 721a4bb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdf1dc2f84584bb32188b32b39e9b1dadc3d7c0224ddcbcbd8bb13189be0bc6,PodSandboxId:1f70fb8e41ab06109051792e89055742a6e88847d98dfc37681d373a9b92d7e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710772054617189991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a139949c-570d-438a-955a-03768aabf027,},Annotations:map[string]string{io.kubernetes.container.hash: e8ecf6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f877d2b9b5d6969de266d253c6bb56f65a5e7309be6635d093ab1a2b18b7ae2,PodSandboxId:2c27f2d1cf2bf9889b4ec75afd46940145e430341dde2d71098ed0adde4ee8df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710772034942835620,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d27e2c5c45d3f09ac70ca190354fe58,},Annotations:map[string]string{io.kubernetes.container.hash: f570e013,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc83d2a0284ea679e26d984aaf8bef522a10e4d8274c4246f0326bcc74476625,PodSandboxId:1a0cf825c74185e2f488d73af60ddb499b2bac88dcdcc3eb6592c0d56b62718f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710772034937420583,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e9cc0c1e86cd72954ecefe8bf52f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281a4e15f233893d5c7e8f484155034e7cd3a218baf225e18b560cd195552645,PodSandboxId:0f618e16932baa4a1ac4dabe4bbe847f579b685de0371d37da1ad1afb48070e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710772034824537926,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71403531e1e1e87ee7c418a4eff2891a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad416fceca513ecfdac4be6680578fe9184fcaa64065021395d8fecb4878cab9,PodSandboxId:14ebf104ef7aca6d8cb4b3b69fd7fa78da1e18e5a9864546d8108acb3599261f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710772034792609603,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e6ae07d8f2d405043ef052c391a762,},Annotations:map[string]string{io.kubernetes.container.hash: adf3e194,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=811cfb91-d4f0-4508-a416-47f0a52a0229 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.635344253Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d01985d9-51f9-45be-b9f2-500a619d738e name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.635452489Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d01985d9-51f9-45be-b9f2-500a619d738e name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.637571731Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71d38fab-8f24-4f15-81cb-4fab13d95939 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.638072639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772863638041689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71d38fab-8f24-4f15-81cb-4fab13d95939 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.638639950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fec2347-3679-4287-80d4-0d7f2df985d3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.638774685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fec2347-3679-4287-80d4-0d7f2df985d3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.639054834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed6e4fe42941b75e00ee31ae83067d7cce9e35e61832045df43b19ce9e57215b,PodSandboxId:9c48a3a8499e2613f106caedd0a693fbd9d6ccae8a979fdb108fef4ab85b7bf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710772056081244408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b,},Annotations:map[string]string{io.kubernetes.container.hash: 75097ff0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1116147cc6e3c0720232a34abd47ae4c0e50beea7079e046863267eb5a15b59,PodSandboxId:bb476a5de03790a8866b910f6e38572a79ae0f678465a4ed2928799c87868f07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054893062850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-jk9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ff991e-2c6b-49ad-bc69-c427d1f24610,},Annotations:map[string]string{io.kubernetes.container.hash: 41c999b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a4759ba2307687c5ab971f85f7a339165bb85e2012da3fc57cd29aa1f0935d,PodSandboxId:9091510b0061b52e7e4c020f864099432ac7be324da92171eccf94f67235f48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054839418719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xczpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0
9adcb8-dacb-4b1c-bbbf-9f056e89da3b,},Annotations:map[string]string{io.kubernetes.container.hash: 721a4bb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdf1dc2f84584bb32188b32b39e9b1dadc3d7c0224ddcbcbd8bb13189be0bc6,PodSandboxId:1f70fb8e41ab06109051792e89055742a6e88847d98dfc37681d373a9b92d7e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710772054617189991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a139949c-570d-438a-955a-03768aabf027,},Annotations:map[string]string{io.kubernetes.container.hash: e8ecf6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f877d2b9b5d6969de266d253c6bb56f65a5e7309be6635d093ab1a2b18b7ae2,PodSandboxId:2c27f2d1cf2bf9889b4ec75afd46940145e430341dde2d71098ed0adde4ee8df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710772034942835620,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d27e2c5c45d3f09ac70ca190354fe58,},Annotations:map[string]string{io.kubernetes.container.hash: f570e013,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc83d2a0284ea679e26d984aaf8bef522a10e4d8274c4246f0326bcc74476625,PodSandboxId:1a0cf825c74185e2f488d73af60ddb499b2bac88dcdcc3eb6592c0d56b62718f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710772034937420583,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e9cc0c1e86cd72954ecefe8bf52f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281a4e15f233893d5c7e8f484155034e7cd3a218baf225e18b560cd195552645,PodSandboxId:0f618e16932baa4a1ac4dabe4bbe847f579b685de0371d37da1ad1afb48070e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710772034824537926,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71403531e1e1e87ee7c418a4eff2891a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad416fceca513ecfdac4be6680578fe9184fcaa64065021395d8fecb4878cab9,PodSandboxId:14ebf104ef7aca6d8cb4b3b69fd7fa78da1e18e5a9864546d8108acb3599261f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710772034792609603,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e6ae07d8f2d405043ef052c391a762,},Annotations:map[string]string{io.kubernetes.container.hash: adf3e194,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fec2347-3679-4287-80d4-0d7f2df985d3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.689896425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78df83f3-ded3-4fd5-93a6-6533847ba831 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.689989112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78df83f3-ded3-4fd5-93a6-6533847ba831 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.692146856Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=678d82f2-0314-4d63-8628-8c8a9b135954 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.694156415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772863694127453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=678d82f2-0314-4d63-8628-8c8a9b135954 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.699151457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66e588f2-e473-4d2e-8f6a-c28c7ca44b23 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.699350575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66e588f2-e473-4d2e-8f6a-c28c7ca44b23 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.699752632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed6e4fe42941b75e00ee31ae83067d7cce9e35e61832045df43b19ce9e57215b,PodSandboxId:9c48a3a8499e2613f106caedd0a693fbd9d6ccae8a979fdb108fef4ab85b7bf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710772056081244408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b,},Annotations:map[string]string{io.kubernetes.container.hash: 75097ff0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1116147cc6e3c0720232a34abd47ae4c0e50beea7079e046863267eb5a15b59,PodSandboxId:bb476a5de03790a8866b910f6e38572a79ae0f678465a4ed2928799c87868f07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054893062850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-jk9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ff991e-2c6b-49ad-bc69-c427d1f24610,},Annotations:map[string]string{io.kubernetes.container.hash: 41c999b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a4759ba2307687c5ab971f85f7a339165bb85e2012da3fc57cd29aa1f0935d,PodSandboxId:9091510b0061b52e7e4c020f864099432ac7be324da92171eccf94f67235f48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054839418719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xczpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0
9adcb8-dacb-4b1c-bbbf-9f056e89da3b,},Annotations:map[string]string{io.kubernetes.container.hash: 721a4bb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdf1dc2f84584bb32188b32b39e9b1dadc3d7c0224ddcbcbd8bb13189be0bc6,PodSandboxId:1f70fb8e41ab06109051792e89055742a6e88847d98dfc37681d373a9b92d7e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710772054617189991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a139949c-570d-438a-955a-03768aabf027,},Annotations:map[string]string{io.kubernetes.container.hash: e8ecf6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f877d2b9b5d6969de266d253c6bb56f65a5e7309be6635d093ab1a2b18b7ae2,PodSandboxId:2c27f2d1cf2bf9889b4ec75afd46940145e430341dde2d71098ed0adde4ee8df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710772034942835620,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d27e2c5c45d3f09ac70ca190354fe58,},Annotations:map[string]string{io.kubernetes.container.hash: f570e013,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc83d2a0284ea679e26d984aaf8bef522a10e4d8274c4246f0326bcc74476625,PodSandboxId:1a0cf825c74185e2f488d73af60ddb499b2bac88dcdcc3eb6592c0d56b62718f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710772034937420583,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e9cc0c1e86cd72954ecefe8bf52f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281a4e15f233893d5c7e8f484155034e7cd3a218baf225e18b560cd195552645,PodSandboxId:0f618e16932baa4a1ac4dabe4bbe847f579b685de0371d37da1ad1afb48070e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710772034824537926,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71403531e1e1e87ee7c418a4eff2891a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad416fceca513ecfdac4be6680578fe9184fcaa64065021395d8fecb4878cab9,PodSandboxId:14ebf104ef7aca6d8cb4b3b69fd7fa78da1e18e5a9864546d8108acb3599261f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710772034792609603,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e6ae07d8f2d405043ef052c391a762,},Annotations:map[string]string{io.kubernetes.container.hash: adf3e194,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66e588f2-e473-4d2e-8f6a-c28c7ca44b23 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.745794936Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f47fe17-5f2e-4efb-8586-40284834c924 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.745902807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f47fe17-5f2e-4efb-8586-40284834c924 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.747689071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a400e1e-7624-4101-8d62-fb5804f42575 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.748253308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772863748222789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a400e1e-7624-4101-8d62-fb5804f42575 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.749231492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1f05e27-3fdf-43ce-ab23-c8e00c41331e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.749310361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1f05e27-3fdf-43ce-ab23-c8e00c41331e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:41:03 no-preload-188109 crio[695]: time="2024-03-18 14:41:03.749587578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed6e4fe42941b75e00ee31ae83067d7cce9e35e61832045df43b19ce9e57215b,PodSandboxId:9c48a3a8499e2613f106caedd0a693fbd9d6ccae8a979fdb108fef4ab85b7bf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710772056081244408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b,},Annotations:map[string]string{io.kubernetes.container.hash: 75097ff0,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1116147cc6e3c0720232a34abd47ae4c0e50beea7079e046863267eb5a15b59,PodSandboxId:bb476a5de03790a8866b910f6e38572a79ae0f678465a4ed2928799c87868f07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054893062850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-jk9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ff991e-2c6b-49ad-bc69-c427d1f24610,},Annotations:map[string]string{io.kubernetes.container.hash: 41c999b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a4759ba2307687c5ab971f85f7a339165bb85e2012da3fc57cd29aa1f0935d,PodSandboxId:9091510b0061b52e7e4c020f864099432ac7be324da92171eccf94f67235f48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710772054839418719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xczpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0
9adcb8-dacb-4b1c-bbbf-9f056e89da3b,},Annotations:map[string]string{io.kubernetes.container.hash: 721a4bb4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdf1dc2f84584bb32188b32b39e9b1dadc3d7c0224ddcbcbd8bb13189be0bc6,PodSandboxId:1f70fb8e41ab06109051792e89055742a6e88847d98dfc37681d373a9b92d7e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710772054617189991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a139949c-570d-438a-955a-03768aabf027,},Annotations:map[string]string{io.kubernetes.container.hash: e8ecf6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f877d2b9b5d6969de266d253c6bb56f65a5e7309be6635d093ab1a2b18b7ae2,PodSandboxId:2c27f2d1cf2bf9889b4ec75afd46940145e430341dde2d71098ed0adde4ee8df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710772034942835620,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d27e2c5c45d3f09ac70ca190354fe58,},Annotations:map[string]string{io.kubernetes.container.hash: f570e013,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc83d2a0284ea679e26d984aaf8bef522a10e4d8274c4246f0326bcc74476625,PodSandboxId:1a0cf825c74185e2f488d73af60ddb499b2bac88dcdcc3eb6592c0d56b62718f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710772034937420583,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e9cc0c1e86cd72954ecefe8bf52f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:281a4e15f233893d5c7e8f484155034e7cd3a218baf225e18b560cd195552645,PodSandboxId:0f618e16932baa4a1ac4dabe4bbe847f579b685de0371d37da1ad1afb48070e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710772034824537926,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71403531e1e1e87ee7c418a4eff2891a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad416fceca513ecfdac4be6680578fe9184fcaa64065021395d8fecb4878cab9,PodSandboxId:14ebf104ef7aca6d8cb4b3b69fd7fa78da1e18e5a9864546d8108acb3599261f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710772034792609603,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-188109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e6ae07d8f2d405043ef052c391a762,},Annotations:map[string]string{io.kubernetes.container.hash: adf3e194,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1f05e27-3fdf-43ce-ab23-c8e00c41331e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ed6e4fe42941b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   9c48a3a8499e2       storage-provisioner
	f1116147cc6e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   bb476a5de0379       coredns-76f75df574-jk9v5
	15a4759ba2307       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   9091510b0061b       coredns-76f75df574-xczpc
	7cdf1dc2f8458       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 minutes ago      Running             kube-proxy                0                   1f70fb8e41ab0       kube-proxy-qpxx5
	1f877d2b9b5d6       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   13 minutes ago      Running             etcd                      2                   2c27f2d1cf2bf       etcd-no-preload-188109
	cc83d2a0284ea       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   13 minutes ago      Running             kube-scheduler            2                   1a0cf825c7418       kube-scheduler-no-preload-188109
	281a4e15f2338       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   13 minutes ago      Running             kube-controller-manager   2                   0f618e16932ba       kube-controller-manager-no-preload-188109
	ad416fceca513       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   13 minutes ago      Running             kube-apiserver            2                   14ebf104ef7ac       kube-apiserver-no-preload-188109
	
	
	==> coredns [15a4759ba2307687c5ab971f85f7a339165bb85e2012da3fc57cd29aa1f0935d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f1116147cc6e3c0720232a34abd47ae4c0e50beea7079e046863267eb5a15b59] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-188109
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-188109
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=no-preload-188109
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T14_27_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 14:27:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-188109
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:40:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:37:52 +0000   Mon, 18 Mar 2024 14:27:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:37:52 +0000   Mon, 18 Mar 2024 14:27:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:37:52 +0000   Mon, 18 Mar 2024 14:27:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:37:52 +0000   Mon, 18 Mar 2024 14:27:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.40
	  Hostname:    no-preload-188109
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a1855d668dc4b229047ec8f42cf9f17
	  System UUID:                8a1855d6-68dc-4b22-9047-ec8f42cf9f17
	  Boot ID:                    d5473383-b39b-4bbe-b8c8-9a0dbd930d0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-jk9v5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-76f75df574-xczpc                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-188109                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-188109             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-188109    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-qpxx5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-188109             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-9hjss              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-188109 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-188109 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-188109 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-188109 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-188109 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-188109 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-188109 event: Registered Node no-preload-188109 in Controller
	
	
	==> dmesg <==
	[  +0.055065] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043601] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.999535] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.918347] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.429258] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.449600] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.059325] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077549] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.204870] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.167903] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.300661] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[Mar18 14:22] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.068417] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.262601] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +5.669539] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.522100] kauditd_printk_skb: 69 callbacks suppressed
	[Mar18 14:27] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.818253] systemd-fstab-generator[3813]: Ignoring "noauto" option for root device
	[  +4.727516] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.584602] systemd-fstab-generator[4133]: Ignoring "noauto" option for root device
	[ +12.986399] systemd-fstab-generator[4319]: Ignoring "noauto" option for root device
	[  +0.139683] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 14:28] kauditd_printk_skb: 76 callbacks suppressed
	
	
	==> etcd [1f877d2b9b5d6969de266d253c6bb56f65a5e7309be6635d093ab1a2b18b7ae2] <==
	{"level":"info","ts":"2024-03-18T14:27:15.468922Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.40:2380"}
	{"level":"info","ts":"2024-03-18T14:27:15.477504Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1931abef9148948f","local-member-id":"b6d9f7a4f9cc11dd","added-peer-id":"b6d9f7a4f9cc11dd","added-peer-peer-urls":["https://192.168.61.40:2380"]}
	{"level":"info","ts":"2024-03-18T14:27:15.477789Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b6d9f7a4f9cc11dd","initial-advertise-peer-urls":["https://192.168.61.40:2380"],"listen-peer-urls":["https://192.168.61.40:2380"],"advertise-client-urls":["https://192.168.61.40:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.40:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T14:27:15.477844Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T14:27:15.966879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T14:27:15.966952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T14:27:15.966982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd received MsgPreVoteResp from b6d9f7a4f9cc11dd at term 1"}
	{"level":"info","ts":"2024-03-18T14:27:15.966997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T14:27:15.967003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd received MsgVoteResp from b6d9f7a4f9cc11dd at term 2"}
	{"level":"info","ts":"2024-03-18T14:27:15.967013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6d9f7a4f9cc11dd became leader at term 2"}
	{"level":"info","ts":"2024-03-18T14:27:15.96702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b6d9f7a4f9cc11dd elected leader b6d9f7a4f9cc11dd at term 2"}
	{"level":"info","ts":"2024-03-18T14:27:15.971008Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b6d9f7a4f9cc11dd","local-member-attributes":"{Name:no-preload-188109 ClientURLs:[https://192.168.61.40:2379]}","request-path":"/0/members/b6d9f7a4f9cc11dd/attributes","cluster-id":"1931abef9148948f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T14:27:15.971228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:27:15.971778Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:27:15.971973Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T14:27:15.97401Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1931abef9148948f","local-member-id":"b6d9f7a4f9cc11dd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:27:15.974106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:27:15.973658Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T14:27:15.974134Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T14:27:15.975547Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T14:27:15.979346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.40:2379"}
	{"level":"info","ts":"2024-03-18T14:27:15.997399Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:37:16.071733Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":711}
	{"level":"info","ts":"2024-03-18T14:37:16.074503Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":711,"took":"1.989946ms","hash":2857860913}
	{"level":"info","ts":"2024-03-18T14:37:16.074678Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2857860913,"revision":711,"compact-revision":-1}
	
	
	==> kernel <==
	 14:41:04 up 19 min,  0 users,  load average: 0.19, 0.23, 0.19
	Linux no-preload-188109 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ad416fceca513ecfdac4be6680578fe9184fcaa64065021395d8fecb4878cab9] <==
	I0318 14:35:18.649402       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:37:17.651522       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:37:17.651794       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0318 14:37:18.652364       1 handler_proxy.go:93] no RequestInfo found in the context
	W0318 14:37:18.652386       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:37:18.652455       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:37:18.652466       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0318 14:37:18.652485       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:37:18.653773       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:38:18.652756       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:38:18.652828       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:38:18.652838       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:38:18.654990       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:38:18.655090       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:38:18.655098       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:40:18.653014       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:40:18.653154       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:40:18.653198       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:40:18.655869       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:40:18.655973       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:40:18.656003       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [281a4e15f233893d5c7e8f484155034e7cd3a218baf225e18b560cd195552645] <==
	I0318 14:35:33.413681       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:36:02.896648       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:36:03.423292       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:36:32.903206       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:36:33.435742       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:37:02.908806       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:37:03.446944       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:37:32.916246       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:37:33.455224       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:38:02.923092       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:38:03.469635       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:38:32.928552       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:38:33.478950       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:38:36.153916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="220.079µs"
	I0318 14:38:48.144124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="90.99µs"
	E0318 14:39:02.935380       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:39:03.489042       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:39:32.942043       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:39:33.497272       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:40:02.947923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:40:03.507019       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:40:32.955037       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:40:33.515872       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:41:02.966206       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:41:03.528319       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7cdf1dc2f84584bb32188b32b39e9b1dadc3d7c0224ddcbcbd8bb13189be0bc6] <==
	I0318 14:27:35.300112       1 server_others.go:72] "Using iptables proxy"
	I0318 14:27:35.384922       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.40"]
	I0318 14:27:35.643433       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0318 14:27:35.643520       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 14:27:35.643547       1 server_others.go:168] "Using iptables Proxier"
	I0318 14:27:35.653546       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 14:27:35.653926       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0318 14:27:35.654251       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 14:27:35.658942       1 config.go:188] "Starting service config controller"
	I0318 14:27:35.658994       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 14:27:35.659020       1 config.go:97] "Starting endpoint slice config controller"
	I0318 14:27:35.659024       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 14:27:35.659489       1 config.go:315] "Starting node config controller"
	I0318 14:27:35.659531       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 14:27:35.760252       1 shared_informer.go:318] Caches are synced for node config
	I0318 14:27:35.760330       1 shared_informer.go:318] Caches are synced for service config
	I0318 14:27:35.760365       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cc83d2a0284ea679e26d984aaf8bef522a10e4d8274c4246f0326bcc74476625] <==
	W0318 14:27:18.506124       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 14:27:18.506226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 14:27:18.640555       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 14:27:18.640667       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 14:27:18.648601       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 14:27:18.648682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 14:27:18.658901       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 14:27:18.659564       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 14:27:18.708662       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 14:27:18.709310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 14:27:18.745425       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 14:27:18.745575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 14:27:18.788058       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 14:27:18.788172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 14:27:18.819684       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 14:27:18.819873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 14:27:18.846402       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 14:27:18.846501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 14:27:18.880531       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 14:27:18.880768       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 14:27:18.939147       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 14:27:18.939431       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 14:27:19.139145       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 14:27:19.139354       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 14:27:20.840228       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:38:24 no-preload-188109 kubelet[4140]: E0318 14:38:24.146100    4140 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-w96bl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-9hjss_kube-system(87eb7974-1ffa-40d4-bb06-4963e92e1c7f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 18 14:38:24 no-preload-188109 kubelet[4140]: E0318 14:38:24.146324    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:38:36 no-preload-188109 kubelet[4140]: E0318 14:38:36.126200    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:38:48 no-preload-188109 kubelet[4140]: E0318 14:38:48.126054    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:39:00 no-preload-188109 kubelet[4140]: E0318 14:39:00.125944    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:39:12 no-preload-188109 kubelet[4140]: E0318 14:39:12.126486    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:39:21 no-preload-188109 kubelet[4140]: E0318 14:39:21.173085    4140 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:39:21 no-preload-188109 kubelet[4140]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:39:21 no-preload-188109 kubelet[4140]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:39:21 no-preload-188109 kubelet[4140]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:39:21 no-preload-188109 kubelet[4140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:39:25 no-preload-188109 kubelet[4140]: E0318 14:39:25.125762    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:39:40 no-preload-188109 kubelet[4140]: E0318 14:39:40.126660    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:39:51 no-preload-188109 kubelet[4140]: E0318 14:39:51.127671    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:40:02 no-preload-188109 kubelet[4140]: E0318 14:40:02.126431    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:40:14 no-preload-188109 kubelet[4140]: E0318 14:40:14.126777    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:40:21 no-preload-188109 kubelet[4140]: E0318 14:40:21.171786    4140 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:40:21 no-preload-188109 kubelet[4140]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:40:21 no-preload-188109 kubelet[4140]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:40:21 no-preload-188109 kubelet[4140]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:40:21 no-preload-188109 kubelet[4140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:40:27 no-preload-188109 kubelet[4140]: E0318 14:40:27.127402    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:40:38 no-preload-188109 kubelet[4140]: E0318 14:40:38.126088    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:40:49 no-preload-188109 kubelet[4140]: E0318 14:40:49.125812    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	Mar 18 14:41:03 no-preload-188109 kubelet[4140]: E0318 14:41:03.127075    4140 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9hjss" podUID="87eb7974-1ffa-40d4-bb06-4963e92e1c7f"
	
	
	==> storage-provisioner [ed6e4fe42941b75e00ee31ae83067d7cce9e35e61832045df43b19ce9e57215b] <==
	I0318 14:27:36.170754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 14:27:36.182196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 14:27:36.182258       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 14:27:36.195214       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 14:27:36.195308       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe2db88c-b0a7-4f9b-a9db-6073f267d102", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-188109_9ff19ba4-0a18-4f37-a93c-ad8138b634cb became leader
	I0318 14:27:36.195667       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-188109_9ff19ba4-0a18-4f37-a93c-ad8138b634cb!
	I0318 14:27:36.297047       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-188109_9ff19ba4-0a18-4f37-a93c-ad8138b634cb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-188109 -n no-preload-188109
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-188109 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9hjss
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-188109 describe pod metrics-server-57f55c9bc5-9hjss
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-188109 describe pod metrics-server-57f55c9bc5-9hjss: exit status 1 (68.919564ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9hjss" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-188109 describe pod metrics-server-57f55c9bc5-9hjss: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (263.16s)
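Note: the repeated ImagePullBackOff errors for metrics-server in the kubelet log above line up with the addon override recorded in the Audit table further down, which enables metrics-server with --registries=MetricsServer=fake.domain, so the kubelet is pulling from an unreachable registry. A minimal sketch of the same override plus a follow-up image check, assuming the profile name no-preload-188109 used in this run:

    $ out/minikube-linux-amd64 addons enable metrics-server -p no-preload-188109 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    $ kubectl --context no-preload-188109 -n kube-system \
        get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
    # on this profile the kubelet log shows the resulting image reference:
    # fake.domain/registry.k8s.io/echoserver:1.4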

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (114.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:39:00.517181 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:39:17.919073 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:40:12.747709 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/custom-flannel-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
E0318 14:40:32.237475 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.229:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-782728 -n old-k8s-version-782728
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 2 (256.60189ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-782728" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-782728 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-782728 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.735µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-782728 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 2 (252.518151ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-782728 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-782728 logs -n 25: (1.611181885s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-059272 sudo find                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo find                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-059272 sudo crio                             | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-059272 sudo crio                            | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-059272                                       | bridge-059272                | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| delete  | -p flannel-059272                                      | flannel-059272               | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-784874 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:11 UTC |
	|         | disable-driver-mounts-784874                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:11 UTC | 18 Mar 24 14:14 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-188109             | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767719            | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC | 18 Mar 24 14:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-075922  | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC | 18 Mar 24 14:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:14 UTC |                     |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-782728        | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-188109                  | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-188109                                   | no-preload-188109            | jenkins | v1.32.0 | 18 Mar 24 14:15 UTC | 18 Mar 24 14:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767719                 | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767719                                  | embed-certs-767719           | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-075922       | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075922 | jenkins | v1.32.0 | 18 Mar 24 14:16 UTC | 18 Mar 24 14:26 UTC |
	|         | default-k8s-diff-port-075922                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-782728             | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC | 18 Mar 24 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-782728                              | old-k8s-version-782728       | jenkins | v1.32.0 | 18 Mar 24 14:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:17:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:17:21.149860 1129259 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:17:21.150009 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150020 1129259 out.go:304] Setting ErrFile to fd 2...
	I0318 14:17:21.150027 1129259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:17:21.150261 1129259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:17:21.150831 1129259 out.go:298] Setting JSON to false
	I0318 14:17:21.151818 1129259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21588,"bootTime":1710749853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:17:21.151904 1129259 start.go:139] virtualization: kvm guest
	I0318 14:17:21.154086 1129259 out.go:177] * [old-k8s-version-782728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:17:21.155595 1129259 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:17:21.157136 1129259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:17:21.155603 1129259 notify.go:220] Checking for updates...
	I0318 14:17:21.160112 1129259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:17:21.161672 1129259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:17:21.163212 1129259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:17:21.164653 1129259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:17:21.166692 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:17:21.167108 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.167176 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.182529 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0318 14:17:21.183003 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.183578 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.183602 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.183959 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.184192 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.186217 1129259 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 14:17:21.187902 1129259 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:17:21.188243 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:17:21.188288 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:17:21.204193 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0318 14:17:21.204646 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:17:21.205226 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:17:21.205262 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:17:21.205658 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:17:21.205879 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:17:21.243555 1129259 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 14:17:21.244857 1129259 start.go:297] selected driver: kvm2
	I0318 14:17:21.244882 1129259 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.245008 1129259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:17:21.245726 1129259 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.245812 1129259 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:17:21.261810 1129259 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:17:21.262852 1129259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:17:21.262962 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:17:21.262975 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:17:21.263064 1129259 start.go:340] cluster config:
	{Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:17:21.263366 1129259 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:17:21.265819 1129259 out.go:177] * Starting "old-k8s-version-782728" primary control-plane node in "old-k8s-version-782728" cluster
	I0318 14:17:24.228169 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:21.267156 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:17:21.267198 1129259 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 14:17:21.267214 1129259 cache.go:56] Caching tarball of preloaded images
	I0318 14:17:21.267311 1129259 preload.go:173] Found /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:17:21.267327 1129259 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 14:17:21.267448 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:17:21.267695 1129259 start.go:360] acquireMachinesLock for old-k8s-version-782728: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:17:27.300185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:33.380164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:36.452102 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:42.536087 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:45.604211 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:51.684168 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:17:54.756227 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:00.836108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:03.908246 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:09.988223 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:13.060123 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:19.140179 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:22.212209 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:28.292206 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:31.364121 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:37.444195 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:40.516108 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:46.596160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:49.668120 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:55.748134 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:18:58.820202 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:04.900183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:07.972128 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:14.052140 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:17.124242 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:23.204175 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:26.276172 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:32.356183 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:35.428256 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:41.508181 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:44.580142 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:50.660193 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:53.732160 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:19:59.812151 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:02.884164 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:08.964174 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:12.036185 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:18.116178 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:21.188147 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:27.268137 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:30.340177 1128583 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.40:22: connect: no route to host
	I0318 14:20:33.345074 1128788 start.go:364] duration metric: took 4m12.599457373s to acquireMachinesLock for "embed-certs-767719"
	I0318 14:20:33.345136 1128788 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:33.345145 1128788 fix.go:54] fixHost starting: 
	I0318 14:20:33.345584 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:33.345638 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:33.362007 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0318 14:20:33.362504 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:33.363014 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:20:33.363037 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:33.363432 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:33.363634 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:33.363787 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:20:33.365593 1128788 fix.go:112] recreateIfNeeded on embed-certs-767719: state=Stopped err=<nil>
	I0318 14:20:33.365619 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	W0318 14:20:33.365792 1128788 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:33.367525 1128788 out.go:177] * Restarting existing kvm2 VM for "embed-certs-767719" ...
	I0318 14:20:33.368930 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Start
	I0318 14:20:33.369145 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring networks are active...
	I0318 14:20:33.370041 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network default is active
	I0318 14:20:33.370474 1128788 main.go:141] libmachine: (embed-certs-767719) Ensuring network mk-embed-certs-767719 is active
	I0318 14:20:33.370832 1128788 main.go:141] libmachine: (embed-certs-767719) Getting domain xml...
	I0318 14:20:33.371609 1128788 main.go:141] libmachine: (embed-certs-767719) Creating domain...
	I0318 14:20:34.596425 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting to get IP...
	I0318 14:20:34.597292 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.597677 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.597753 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.597666 1130210 retry.go:31] will retry after 244.312377ms: waiting for machine to come up
	I0318 14:20:34.843360 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:34.844039 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:34.844082 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:34.843988 1130210 retry.go:31] will retry after 388.782007ms: waiting for machine to come up
	I0318 14:20:35.234931 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.235304 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.235334 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.235252 1130210 retry.go:31] will retry after 449.871291ms: waiting for machine to come up
	I0318 14:20:33.342334 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:33.342408 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.342790 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:20:33.342823 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:20:33.343061 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:20:33.344920 1128583 machine.go:97] duration metric: took 4m37.408911801s to provisionDockerMachine
	I0318 14:20:33.344982 1128583 fix.go:56] duration metric: took 4m37.431584024s for fixHost
	I0318 14:20:33.344992 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 4m37.431613044s
	W0318 14:20:33.345017 1128583 start.go:713] error starting host: provision: host is not running
	W0318 14:20:33.345209 1128583 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 14:20:33.345223 1128583 start.go:728] Will try again in 5 seconds ...
	I0318 14:20:35.687048 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:35.687565 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:35.687604 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:35.687508 1130210 retry.go:31] will retry after 470.225551ms: waiting for machine to come up
	I0318 14:20:36.159138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.159642 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.159668 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.159590 1130210 retry.go:31] will retry after 638.634635ms: waiting for machine to come up
	I0318 14:20:36.799431 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:36.799820 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:36.799857 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:36.799764 1130210 retry.go:31] will retry after 758.659569ms: waiting for machine to come up
	I0318 14:20:37.559752 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:37.560189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:37.560224 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:37.560116 1130210 retry.go:31] will retry after 1.163344023s: waiting for machine to come up
	I0318 14:20:38.724981 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:38.725498 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:38.725561 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:38.725341 1130210 retry.go:31] will retry after 1.155934539s: waiting for machine to come up
	I0318 14:20:39.882622 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:39.883025 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:39.883074 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:39.882966 1130210 retry.go:31] will retry after 1.832023161s: waiting for machine to come up
	I0318 14:20:38.347296 1128583 start.go:360] acquireMachinesLock for no-preload-188109: {Name:mkab080da72017f9265de5adb0ea5a5114c7ddcc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:20:41.717138 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:41.717723 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:41.717757 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:41.717642 1130210 retry.go:31] will retry after 1.526824443s: waiting for machine to come up
	I0318 14:20:43.246389 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:43.246960 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:43.246997 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:43.246901 1130210 retry.go:31] will retry after 2.608273558s: waiting for machine to come up
	I0318 14:20:45.858375 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:45.858919 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:45.858943 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:45.858871 1130210 retry.go:31] will retry after 2.272908905s: waiting for machine to come up
	I0318 14:20:48.134345 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:48.134774 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | unable to find current IP address of domain embed-certs-767719 in network mk-embed-certs-767719
	I0318 14:20:48.134826 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | I0318 14:20:48.134739 1130210 retry.go:31] will retry after 3.671073699s: waiting for machine to come up
	I0318 14:20:53.273198 1128964 start.go:364] duration metric: took 4m11.791347901s to acquireMachinesLock for "default-k8s-diff-port-075922"
	I0318 14:20:53.273284 1128964 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:20:53.273295 1128964 fix.go:54] fixHost starting: 
	I0318 14:20:53.273834 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:20:53.273879 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:20:53.291440 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0318 14:20:53.291988 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:20:53.292571 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:20:53.292605 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:20:53.292931 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:20:53.293125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:20:53.293278 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:20:53.294856 1128964 fix.go:112] recreateIfNeeded on default-k8s-diff-port-075922: state=Stopped err=<nil>
	I0318 14:20:53.294889 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	W0318 14:20:53.295063 1128964 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:20:53.297784 1128964 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-075922" ...
	I0318 14:20:51.809859 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.810477 1128788 main.go:141] libmachine: (embed-certs-767719) Found IP for machine: 192.168.72.45
	I0318 14:20:51.810503 1128788 main.go:141] libmachine: (embed-certs-767719) Reserving static IP address...
	I0318 14:20:51.810518 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has current primary IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.811061 1128788 main.go:141] libmachine: (embed-certs-767719) Reserved static IP address: 192.168.72.45
	I0318 14:20:51.811104 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.811112 1128788 main.go:141] libmachine: (embed-certs-767719) Waiting for SSH to be available...
	I0318 14:20:51.811137 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | skip adding static IP to network mk-embed-certs-767719 - found existing host DHCP lease matching {name: "embed-certs-767719", mac: "52:54:00:86:ad:e4", ip: "192.168.72.45"}
	I0318 14:20:51.811163 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Getting to WaitForSSH function...
	I0318 14:20:51.813739 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814076 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.814121 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.814189 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH client type: external
	I0318 14:20:51.814225 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa (-rw-------)
	I0318 14:20:51.814282 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:20:51.814327 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | About to run SSH command:
	I0318 14:20:51.814346 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | exit 0
	I0318 14:20:51.944192 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | SSH cmd err, output: <nil>: 
	I0318 14:20:51.944624 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetConfigRaw
	I0318 14:20:51.945477 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:51.948244 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.948667 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.948711 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.949069 1128788 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/config.json ...
	I0318 14:20:51.949305 1128788 machine.go:94] provisionDockerMachine start ...
	I0318 14:20:51.949327 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:51.949596 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:51.952267 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952653 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:51.952703 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:51.952836 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:51.953047 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953200 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:51.953376 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:51.953525 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:51.953772 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:51.953785 1128788 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:20:52.068806 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:20:52.068847 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069162 1128788 buildroot.go:166] provisioning hostname "embed-certs-767719"
	I0318 14:20:52.069198 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.069500 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.072258 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072750 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.072785 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.072939 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.073146 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073312 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.073492 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.073730 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.073916 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.073934 1128788 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-767719 && echo "embed-certs-767719" | sudo tee /etc/hostname
	I0318 14:20:52.204197 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-767719
	
	I0318 14:20:52.204258 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.207520 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.207927 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.207959 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.208178 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.208478 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208740 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.208961 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.209164 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.209352 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.209370 1128788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-767719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-767719/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-767719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:20:52.337185 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:20:52.337220 1128788 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:20:52.337243 1128788 buildroot.go:174] setting up certificates
	I0318 14:20:52.337253 1128788 provision.go:84] configureAuth start
	I0318 14:20:52.337264 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetMachineName
	I0318 14:20:52.337561 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:52.340693 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341061 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.341098 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.341280 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.343239 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343570 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.343595 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.343709 1128788 provision.go:143] copyHostCerts
	I0318 14:20:52.343782 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:20:52.343794 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:20:52.343888 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:20:52.344001 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:20:52.344010 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:20:52.344038 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:20:52.344095 1128788 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:20:52.344103 1128788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:20:52.344126 1128788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:20:52.344220 1128788 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.embed-certs-767719 san=[127.0.0.1 192.168.72.45 embed-certs-767719 localhost minikube]
	I0318 14:20:52.550241 1128788 provision.go:177] copyRemoteCerts
	I0318 14:20:52.550380 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:20:52.550433 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.553182 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553591 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.553626 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.553824 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.554056 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.554241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.554392 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:52.645341 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:20:52.672476 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:20:52.698609 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:20:52.724434 1128788 provision.go:87] duration metric: took 387.165868ms to configureAuth
	I0318 14:20:52.724471 1128788 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:20:52.724727 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:20:52.724827 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:52.727323 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727700 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:52.727764 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:52.727882 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:52.728098 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:52.728443 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:52.728626 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:52.728859 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:52.728878 1128788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:20:53.012918 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:20:53.012959 1128788 machine.go:97] duration metric: took 1.063639009s to provisionDockerMachine
	I0318 14:20:53.012976 1128788 start.go:293] postStartSetup for "embed-certs-767719" (driver="kvm2")
	I0318 14:20:53.012990 1128788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:20:53.013039 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.013471 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:20:53.013505 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.016524 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.016929 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.016961 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.017153 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.017372 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.017582 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.017846 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.107977 1128788 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:20:53.113146 1128788 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:20:53.113184 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:20:53.113302 1128788 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:20:53.113423 1128788 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:20:53.113558 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:20:53.125166 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:53.152094 1128788 start.go:296] duration metric: took 139.099686ms for postStartSetup
	I0318 14:20:53.152147 1128788 fix.go:56] duration metric: took 19.807001958s for fixHost
	I0318 14:20:53.152194 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.155058 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155371 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.155401 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.155643 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.155908 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156138 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.156307 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.156536 1128788 main.go:141] libmachine: Using SSH client type: native
	I0318 14:20:53.156770 1128788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.45 22 <nil> <nil>}
	I0318 14:20:53.156786 1128788 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:20:53.272998 1128788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771653.240528844
	
	I0318 14:20:53.273029 1128788 fix.go:216] guest clock: 1710771653.240528844
	I0318 14:20:53.273046 1128788 fix.go:229] Guest: 2024-03-18 14:20:53.240528844 +0000 UTC Remote: 2024-03-18 14:20:53.15215228 +0000 UTC m=+272.563569050 (delta=88.376564ms)
	I0318 14:20:53.273075 1128788 fix.go:200] guest clock delta is within tolerance: 88.376564ms
	I0318 14:20:53.273083 1128788 start.go:83] releasing machines lock for "embed-certs-767719", held for 19.927965733s
	I0318 14:20:53.273118 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.273431 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:53.276309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276740 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.276768 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.276958 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277493 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277716 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:20:53.277806 1128788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:20:53.277851 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.277976 1128788 ssh_runner.go:195] Run: cat /version.json
	I0318 14:20:53.278002 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:20:53.280799 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.280853 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281234 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281263 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281289 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:53.281309 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:53.281518 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281616 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:20:53.281767 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281850 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:20:53.281945 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282028 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:20:53.282090 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.282179 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:20:53.386584 1128788 ssh_runner.go:195] Run: systemctl --version
	I0318 14:20:53.393371 1128788 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:20:53.547565 1128788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:20:53.554182 1128788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:20:53.554266 1128788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:20:53.573031 1128788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:20:53.573071 1128788 start.go:494] detecting cgroup driver to use...
	I0318 14:20:53.573197 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:20:53.591649 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:20:53.607279 1128788 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:20:53.607359 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:20:53.624327 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:20:53.640398 1128788 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:20:53.759979 1128788 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:20:53.931294 1128788 docker.go:233] disabling docker service ...
	I0318 14:20:53.931381 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:20:53.954433 1128788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:20:53.969396 1128788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:20:54.107898 1128788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:20:54.241874 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:20:54.257748 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:20:54.278981 1128788 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:20:54.279057 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.293329 1128788 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:20:54.293390 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.304838 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.316646 1128788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:20:54.328623 1128788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:20:54.340540 1128788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:20:54.352368 1128788 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:20:54.352433 1128788 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:20:54.368965 1128788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:20:54.389268 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:54.511182 1128788 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:20:54.657685 1128788 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:20:54.657798 1128788 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:20:54.663591 1128788 start.go:562] Will wait 60s for crictl version
	I0318 14:20:54.663670 1128788 ssh_runner.go:195] Run: which crictl
	I0318 14:20:54.667903 1128788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:20:54.707961 1128788 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:20:54.708065 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.738240 1128788 ssh_runner.go:195] Run: crio --version
	I0318 14:20:54.773562 1128788 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:20:54.775286 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetIP
	I0318 14:20:54.778784 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779228 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:20:54.779265 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:20:54.779498 1128788 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 14:20:54.784575 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:54.799207 1128788 kubeadm.go:877] updating cluster {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:20:54.799380 1128788 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:20:54.799440 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:54.839309 1128788 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:20:54.839387 1128788 ssh_runner.go:195] Run: which lz4
	I0318 14:20:54.844323 1128788 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:20:54.850487 1128788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:20:54.850524 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 14:20:53.299380 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Start
	I0318 14:20:53.299595 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring networks are active...
	I0318 14:20:53.300497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network default is active
	I0318 14:20:53.300887 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Ensuring network mk-default-k8s-diff-port-075922 is active
	I0318 14:20:53.301316 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Getting domain xml...
	I0318 14:20:53.302079 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Creating domain...
	I0318 14:20:54.607619 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting to get IP...
	I0318 14:20:54.608510 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609075 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.609160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.609050 1130331 retry.go:31] will retry after 282.377323ms: waiting for machine to come up
	I0318 14:20:54.892766 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:54.893323 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:54.893259 1130331 retry.go:31] will retry after 264.840581ms: waiting for machine to come up
	I0318 14:20:55.160018 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160536 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.160578 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.160460 1130331 retry.go:31] will retry after 402.458985ms: waiting for machine to come up
	I0318 14:20:55.564282 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564773 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.564804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.564727 1130331 retry.go:31] will retry after 382.70672ms: waiting for machine to come up
	I0318 14:20:55.949676 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950183 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:55.950218 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:55.950122 1130331 retry.go:31] will retry after 676.466466ms: waiting for machine to come up
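
The "retry.go:31] will retry after …ms: waiting for machine to come up" lines above show a polling loop with growing, jittered delays while libvirt hands out a DHCP lease to the new domain. A minimal illustrative sketch of that pattern in Go follows; this is not minikube's actual retry.go, and the function name, delays, and timeout are assumptions made for the example.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls a lookup callback until it returns an IP or the deadline
// passes, sleeping a growing, jittered interval between attempts.
func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // add jitter
		delay = delay * 3 / 2                                        // back off
	}
	return "", errors.New("machine did not report an IP before the deadline")
}

func main() {
	// Hypothetical lookup that never succeeds, just to exercise the loop.
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no DHCP lease yet") }, 2*time.Second)
	fmt.Println(ip, err)
}
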
	I0318 14:20:56.798325 1128788 crio.go:444] duration metric: took 1.954051074s to copy over tarball
	I0318 14:20:56.798418 1128788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:20:59.431722 1128788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.633260911s)
	I0318 14:20:59.431777 1128788 crio.go:451] duration metric: took 2.633417573s to extract the tarball
	I0318 14:20:59.431788 1128788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:20:59.476265 1128788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:20:59.534130 1128788 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:20:59.534161 1128788 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:20:59.534173 1128788 kubeadm.go:928] updating node { 192.168.72.45 8443 v1.28.4 crio true true} ...
	I0318 14:20:59.534357 1128788 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-767719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:20:59.534499 1128788 ssh_runner.go:195] Run: crio config
	I0318 14:20:59.594778 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:20:59.594814 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:20:59.594831 1128788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:20:59.594894 1128788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.45 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-767719 NodeName:embed-certs-767719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:20:59.595092 1128788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-767719"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:20:59.595203 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:20:59.610298 1128788 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:20:59.610388 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:20:59.624050 1128788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 14:20:59.644283 1128788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:20:59.663987 1128788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0318 14:20:59.685379 1128788 ssh_runner.go:195] Run: grep 192.168.72.45	control-plane.minikube.internal$ /etc/hosts
	I0318 14:20:59.690360 1128788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:20:59.705657 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:20:59.839158 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:20:59.857617 1128788 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719 for IP: 192.168.72.45
	I0318 14:20:59.857642 1128788 certs.go:194] generating shared ca certs ...
	I0318 14:20:59.857674 1128788 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:20:59.857839 1128788 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:20:59.857882 1128788 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:20:59.857893 1128788 certs.go:256] generating profile certs ...
	I0318 14:20:59.858006 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/client.key
	I0318 14:20:59.858061 1128788 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key.f59f641c
	I0318 14:20:59.858098 1128788 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key
	I0318 14:20:59.858268 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:20:59.858301 1128788 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:20:59.858308 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:20:59.858331 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:20:59.858360 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:20:59.858382 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:20:59.858424 1128788 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:20:59.859110 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:20:59.901101 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:20:59.947010 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:20:59.990882 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:00.032358 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 14:21:00.070194 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:00.108670 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:00.137760 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:00.168481 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:00.199292 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:00.228315 1128788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:00.257409 1128788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:00.277720 1128788 ssh_runner.go:195] Run: openssl version
	I0318 14:21:00.284138 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:00.296443 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302083 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.302160 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:00.308748 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:00.322025 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:00.334654 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340319 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.340404 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:00.347454 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:00.359627 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:00.371865 1128788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377236 1128788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.377335 1128788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:00.387041 1128788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:00.404525 1128788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:00.412919 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:00.422577 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:00.434217 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:00.444535 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:00.452863 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:00.459979 1128788 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
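
The run of "openssl x509 -noout -in … -checkend 86400" commands above verifies that each control-plane certificate remains valid for at least the next 24 hours before the existing configuration is reused. A small illustrative Go equivalent is sketched below; the file path and function name are assumptions for the example, not minikube code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// roughly what "openssl x509 -checkend <seconds>" probes.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
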
	I0318 14:21:00.467503 1128788 kubeadm.go:391] StartCluster: {Name:embed-certs-767719 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-767719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:00.467680 1128788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:00.467780 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.507833 1128788 cri.go:89] found id: ""
	I0318 14:21:00.507926 1128788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:00.519958 1128788 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:00.519982 1128788 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:00.520011 1128788 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:00.520066 1128788 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:00.532229 1128788 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:00.533479 1128788 kubeconfig.go:125] found "embed-certs-767719" server: "https://192.168.72.45:8443"
	I0318 14:21:00.536185 1128788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:00.548434 1128788 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.45
	I0318 14:21:00.548484 1128788 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:00.548498 1128788 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:00.548551 1128788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:00.592096 1128788 cri.go:89] found id: ""
	I0318 14:21:00.592168 1128788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:00.610826 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:00.622294 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:00.622330 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:00.622386 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:00.633009 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:00.633089 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:20:56.628134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628708 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:56.628747 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:56.628643 1130331 retry.go:31] will retry after 703.45784ms: waiting for machine to come up
	I0318 14:20:57.334203 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334666 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:57.334702 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:57.334600 1130331 retry.go:31] will retry after 1.177266521s: waiting for machine to come up
	I0318 14:20:58.513803 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514452 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:58.514485 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:58.514389 1130331 retry.go:31] will retry after 1.389627955s: waiting for machine to come up
	I0318 14:20:59.906109 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906663 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:20:59.906750 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:20:59.906632 1130331 retry.go:31] will retry after 1.239662517s: waiting for machine to come up
	I0318 14:21:01.147929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:01.148325 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:01.148248 1130331 retry.go:31] will retry after 2.183067358s: waiting for machine to come up
	I0318 14:21:00.644684 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:00.921213 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:00.921307 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:00.932412 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.943408 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:00.943481 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:00.955574 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:00.966416 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:00.966483 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:00.978014 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:00.993622 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:01.128726 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.331974 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.203164646s)
	I0318 14:21:02.332035 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.574592 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.686011 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:02.821189 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:02.821373 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.322200 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.822207 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:03.838586 1128788 api_server.go:72] duration metric: took 1.017395673s to wait for apiserver process to appear ...
	I0318 14:21:03.838622 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:03.838660 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:03.839282 1128788 api_server.go:269] stopped: https://192.168.72.45:8443/healthz: Get "https://192.168.72.45:8443/healthz": dial tcp 192.168.72.45:8443: connect: connection refused
	I0318 14:21:04.339675 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:03.333080 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333620 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:03.333648 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:03.333583 1130331 retry.go:31] will retry after 2.259124316s: waiting for machine to come up
	I0318 14:21:05.594356 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:05.594823 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:05.594754 1130331 retry.go:31] will retry after 2.492274875s: waiting for machine to come up
	I0318 14:21:07.054330 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:07.054373 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:07.054392 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.073841 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.073894 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.339285 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.345307 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.345340 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:07.838915 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:07.846722 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:07.846759 1128788 api_server.go:103] status: https://192.168.72.45:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:08.339409 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:21:08.344790 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:21:08.358050 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:08.358097 1128788 api_server.go:131] duration metric: took 4.519466088s to wait for apiserver health ...
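The log above shows the readiness wait pattern: repeated GETs against the apiserver's /healthz endpoint that return 500 while bootstrap post-start hooks ([-]poststarthook/rbac/bootstrap-roles, [-]poststarthook/scheduling/bootstrap-system-priority-classes) are still running, then 200 once the control plane is up. A minimal Go sketch of that style of probe follows; the endpoint URL, poll interval, timeout, and TLS handling are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test VM's apiserver uses a self-signed CA, so verification is
		// skipped here purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is ready
			}
			// A 500 with "[-]poststarthook/... failed" lines means bootstrap
			// hooks are still running; retry after a short pause.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.45:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}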
	I0318 14:21:08.358121 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:21:08.358130 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:08.359982 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:21:08.361428 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:08.378195 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:21:08.409269 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:08.421874 1128788 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:08.421960 1128788 system_pods.go:61] "coredns-5dd5756b68-4dmw2" [324897fc-dd26-47f1-b8bc-4d2ed721a576] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:08.421971 1128788 system_pods.go:61] "etcd-embed-certs-767719" [df147cb8-989c-408d-ade8-547858a8c2bb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:08.421982 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [82f7d170-3b3c-448c-b824-6d263c5c1128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:08.421989 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [cd4dd4f3-a727-4864-b0e9-a89758537de9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:08.422002 1128788 system_pods.go:61] "kube-proxy-mtx9w" [b46b48ff-e4c0-4595-82c4-7c0c86103262] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:08.422010 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [63774f42-c85e-467f-9bd3-0c78d44b2681] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:08.422022 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-jr9wp" [e40748e2-ebc3-4c4f-a9cc-01bbc7416f35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:08.422030 1128788 system_pods.go:61] "storage-provisioner" [1b51e6a7-2693-4d0b-b47e-ccbcb1e46424] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:08.422047 1128788 system_pods.go:74] duration metric: took 12.746875ms to wait for pod list to return data ...
	I0318 14:21:08.422058 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:08.432361 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:08.432461 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:08.432483 1128788 node_conditions.go:105] duration metric: took 10.415127ms to run NodePressure ...
	I0318 14:21:08.432524 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:08.730544 1128788 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:08.735970 1128788 kubeadm.go:733] kubelet initialised
	I0318 14:21:08.736001 1128788 kubeadm.go:734] duration metric: took 5.422027ms waiting for restarted kubelet to initialise ...
	I0318 14:21:08.736042 1128788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:08.745586 1128788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
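The pod_ready waits above poll each system-critical pod until its Ready condition becomes True (seen later as has status "Ready":"False" followed by "True"). A rough client-go sketch of that kind of wait is below; the kubeconfig path and 2-second poll interval are assumptions for illustration and this is not minikube's exact code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the named pod until it is Ready or the timeout expires.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is an illustrative choice
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// The kubeconfig path here is a placeholder assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "coredns-5dd5756b68-4dmw2", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}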
	I0318 14:21:08.090446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090804 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | unable to find current IP address of domain default-k8s-diff-port-075922 in network mk-default-k8s-diff-port-075922
	I0318 14:21:08.090834 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | I0318 14:21:08.090779 1130331 retry.go:31] will retry after 3.31085892s: waiting for machine to come up
	I0318 14:21:12.749494 1129259 start.go:364] duration metric: took 3m51.481737314s to acquireMachinesLock for "old-k8s-version-782728"
	I0318 14:21:12.749582 1129259 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:12.749596 1129259 fix.go:54] fixHost starting: 
	I0318 14:21:12.750059 1129259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:12.750110 1129259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:12.772262 1129259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0318 14:21:12.772787 1129259 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:12.773383 1129259 main.go:141] libmachine: Using API Version  1
	I0318 14:21:12.773408 1129259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:12.773864 1129259 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:12.774101 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:12.774261 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetState
	I0318 14:21:12.776193 1129259 fix.go:112] recreateIfNeeded on old-k8s-version-782728: state=Stopped err=<nil>
	I0318 14:21:12.776227 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	W0318 14:21:12.776377 1129259 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:12.778538 1129259 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-782728" ...
	I0318 14:21:11.405935 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406497 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has current primary IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.406539 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Found IP for machine: 192.168.83.39
	I0318 14:21:11.406553 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserving static IP address...
	I0318 14:21:11.407015 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.407048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | skip adding static IP to network mk-default-k8s-diff-port-075922 - found existing host DHCP lease matching {name: "default-k8s-diff-port-075922", mac: "52:54:00:c5:53:d5", ip: "192.168.83.39"}
	I0318 14:21:11.407066 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Reserved static IP address: 192.168.83.39
	I0318 14:21:11.407081 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Waiting for SSH to be available...
	I0318 14:21:11.407093 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Getting to WaitForSSH function...
	I0318 14:21:11.409327 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409674 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.409706 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.409895 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH client type: external
	I0318 14:21:11.409919 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa (-rw-------)
	I0318 14:21:11.410034 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:11.410065 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | About to run SSH command:
	I0318 14:21:11.410089 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | exit 0
	I0318 14:21:11.544258 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:11.544698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetConfigRaw
	I0318 14:21:11.545370 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.548333 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.548729 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.548764 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.549053 1128964 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/config.json ...
	I0318 14:21:11.549275 1128964 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:11.549295 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:11.549533 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.551799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552156 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.552186 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.552280 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.552482 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552657 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.552797 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.552994 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.553261 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.553278 1128964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:11.665093 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:11.665132 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665456 1128964 buildroot.go:166] provisioning hostname "default-k8s-diff-port-075922"
	I0318 14:21:11.665493 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.665730 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.668911 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.669413 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.669679 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.669923 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670134 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.670319 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.670530 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.670718 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.670734 1128964 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-075922 && echo "default-k8s-diff-port-075922" | sudo tee /etc/hostname
	I0318 14:21:11.807520 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-075922
	
	I0318 14:21:11.807552 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.810614 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811011 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.811047 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.811257 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:11.811480 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:11.811941 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:11.812155 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:11.812361 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:11.812387 1128964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-075922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-075922/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-075922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:11.942984 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:11.943022 1128964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:11.943078 1128964 buildroot.go:174] setting up certificates
	I0318 14:21:11.943094 1128964 provision.go:84] configureAuth start
	I0318 14:21:11.943108 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetMachineName
	I0318 14:21:11.943441 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:11.946659 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947091 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.947125 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.947328 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:11.949852 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950275 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:11.950310 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:11.950496 1128964 provision.go:143] copyHostCerts
	I0318 14:21:11.950579 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:11.950596 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:11.950679 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:11.950859 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:11.950873 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:11.950898 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:11.950964 1128964 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:11.950971 1128964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:11.950988 1128964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:11.951041 1128964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-075922 san=[127.0.0.1 192.168.83.39 default-k8s-diff-port-075922 localhost minikube]
	I0318 14:21:12.019678 1128964 provision.go:177] copyRemoteCerts
	I0318 14:21:12.019756 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:12.019788 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.023122 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023603 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.023639 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.023862 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.024077 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.024294 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.024445 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.112914 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:12.142575 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 14:21:12.171747 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:12.200144 1128964 provision.go:87] duration metric: took 257.034667ms to configureAuth
	I0318 14:21:12.200177 1128964 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:12.200401 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:21:12.200515 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.203573 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.203978 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.204019 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.204160 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.204379 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204658 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.204896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.205131 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.205335 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.205367 1128964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:12.494965 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:12.494997 1128964 machine.go:97] duration metric: took 945.707691ms to provisionDockerMachine
	I0318 14:21:12.495012 1128964 start.go:293] postStartSetup for "default-k8s-diff-port-075922" (driver="kvm2")
	I0318 14:21:12.495026 1128964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:12.495048 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.495450 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:12.495486 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.498444 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.498821 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.498928 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.499166 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.499363 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.499560 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.588350 1128964 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:12.593611 1128964 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:12.593638 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:12.593714 1128964 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:12.593788 1128964 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:12.593875 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:12.605751 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:12.633577 1128964 start.go:296] duration metric: took 138.54984ms for postStartSetup
	I0318 14:21:12.633621 1128964 fix.go:56] duration metric: took 19.360327899s for fixHost
	I0318 14:21:12.633645 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.636446 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636822 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.636850 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.636989 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.637237 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637428 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.637596 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.637786 1128964 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:12.637988 1128964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.83.39 22 <nil> <nil>}
	I0318 14:21:12.638002 1128964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:12.749326 1128964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771672.727120819
	
	I0318 14:21:12.749355 1128964 fix.go:216] guest clock: 1710771672.727120819
	I0318 14:21:12.749364 1128964 fix.go:229] Guest: 2024-03-18 14:21:12.727120819 +0000 UTC Remote: 2024-03-18 14:21:12.633625447 +0000 UTC m=+271.308784721 (delta=93.495372ms)
	I0318 14:21:12.749386 1128964 fix.go:200] guest clock delta is within tolerance: 93.495372ms
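The guest/host timestamps logged just above differ by 93.495372ms, which the fix step accepts as within tolerance, so no clock resync is forced. A small Go sketch of that comparison, using the exact timestamps from the log; the one-second tolerance is an illustrative assumption, not necessarily the value minikube uses.

package main

import (
	"fmt"
	"time"
)

// checkClockSkew returns the absolute guest/host clock difference and whether
// it falls within the given tolerance.
func checkClockSkew(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1710771672, 727120819)                         // guest clock from the log
	host := time.Date(2024, 3, 18, 14, 21, 12, 633625447, time.UTC)   // remote clock from the log
	delta, ok := checkClockSkew(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // prints ~93.495372ms, true
}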
	I0318 14:21:12.749392 1128964 start.go:83] releasing machines lock for "default-k8s-diff-port-075922", held for 19.476136638s
	I0318 14:21:12.749418 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.749732 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:12.752996 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753471 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.753506 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.753815 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754448 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754651 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:21:12.754744 1128964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:12.754791 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.754943 1128964 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:12.754970 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:21:12.758153 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758303 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758628 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758660 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:12.758694 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:12.758758 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758927 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:21:12.758988 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759057 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:21:12.759157 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759251 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:21:12.759292 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.759371 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:21:12.841423 1128964 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:12.868154 1128964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:13.020652 1128964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:13.028168 1128964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:13.028267 1128964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:13.047225 1128964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:13.047264 1128964 start.go:494] detecting cgroup driver to use...
	I0318 14:21:13.047361 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:13.064518 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:13.080271 1128964 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:13.080356 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:13.095583 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:13.110387 1128964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:13.250934 1128964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:13.450657 1128964 docker.go:233] disabling docker service ...
	I0318 14:21:13.450738 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:13.471701 1128964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:13.488157 1128964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:13.644961 1128964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:13.811333 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:13.828584 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:13.852476 1128964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:13.852557 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.864849 1128964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:13.864951 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.877723 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.890337 1128964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:13.902558 1128964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:13.915858 1128964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:13.928426 1128964 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:13.928526 1128964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:13.951761 1128964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:13.964785 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:14.144432 1128964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:14.311928 1128964 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:14.312078 1128964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:14.319279 1128964 start.go:562] Will wait 60s for crictl version
	I0318 14:21:14.319347 1128964 ssh_runner.go:195] Run: which crictl
	I0318 14:21:14.325325 1128964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:14.385244 1128964 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:14.385344 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.426242 1128964 ssh_runner.go:195] Run: crio --version
	I0318 14:21:14.460725 1128964 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 14:21:10.753176 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:12.756558 1128788 pod_ready.go:102] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:13.760252 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:13.760295 1128788 pod_ready.go:81] duration metric: took 5.014671723s for pod "coredns-5dd5756b68-4dmw2" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:13.760315 1128788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:12.780014 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .Start
	I0318 14:21:12.780429 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring networks are active...
	I0318 14:21:12.781303 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network default is active
	I0318 14:21:12.781644 1129259 main.go:141] libmachine: (old-k8s-version-782728) Ensuring network mk-old-k8s-version-782728 is active
	I0318 14:21:12.782077 1129259 main.go:141] libmachine: (old-k8s-version-782728) Getting domain xml...
	I0318 14:21:12.782826 1129259 main.go:141] libmachine: (old-k8s-version-782728) Creating domain...
	I0318 14:21:14.142992 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting to get IP...
	I0318 14:21:14.144199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.144824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.144851 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.144681 1130456 retry.go:31] will retry after 192.354686ms: waiting for machine to come up
	I0318 14:21:14.339303 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.339861 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.339886 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.339806 1130456 retry.go:31] will retry after 389.480557ms: waiting for machine to come up
	I0318 14:21:14.731567 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:14.732127 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:14.732163 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:14.732075 1130456 retry.go:31] will retry after 435.139168ms: waiting for machine to come up
	I0318 14:21:15.168657 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.169170 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.169209 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.169147 1130456 retry.go:31] will retry after 398.075576ms: waiting for machine to come up
	I0318 14:21:15.569132 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:15.569651 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:15.569699 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:15.569627 1130456 retry.go:31] will retry after 716.720722ms: waiting for machine to come up
	I0318 14:21:14.461974 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetIP
	I0318 14:21:14.465116 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465652 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:21:14.465696 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:21:14.465903 1128964 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:14.471039 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:14.486098 1128964 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:14.486307 1128964 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:21:14.486379 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:14.526373 1128964 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 14:21:14.526476 1128964 ssh_runner.go:195] Run: which lz4
	I0318 14:21:14.531145 1128964 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:21:14.536370 1128964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:14.536412 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 14:21:15.769517 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:17.772721 1128788 pod_ready.go:102] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:18.769552 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:18.769590 1128788 pod_ready.go:81] duration metric: took 5.009265127s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:18.769610 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:16.287569 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:16.288171 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:16.288208 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:16.288111 1130456 retry.go:31] will retry after 837.119291ms: waiting for machine to come up
	I0318 14:21:17.127197 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.127610 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.127641 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.127572 1130456 retry.go:31] will retry after 786.468871ms: waiting for machine to come up
	I0318 14:21:17.916280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:17.916885 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:17.916920 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:17.916827 1130456 retry.go:31] will retry after 1.219601482s: waiting for machine to come up
	I0318 14:21:19.137624 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:19.138092 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:19.138124 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:19.138038 1130456 retry.go:31] will retry after 1.236592895s: waiting for machine to come up
	I0318 14:21:20.376069 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:20.376549 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:20.376574 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:20.376518 1130456 retry.go:31] will retry after 2.101851485s: waiting for machine to come up
	I0318 14:21:16.505094 1128964 crio.go:444] duration metric: took 1.973996063s to copy over tarball
	I0318 14:21:16.505250 1128964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:21:19.251009 1128964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.745717226s)
	I0318 14:21:19.251045 1128964 crio.go:451] duration metric: took 2.745895394s to extract the tarball
	I0318 14:21:19.251053 1128964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:21:19.308392 1128964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:19.363143 1128964 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:21:19.363172 1128964 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:21:19.363181 1128964 kubeadm.go:928] updating node { 192.168.83.39 8444 v1.28.4 crio true true} ...
	I0318 14:21:19.363313 1128964 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-075922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:21:19.363415 1128964 ssh_runner.go:195] Run: crio config
	I0318 14:21:19.415995 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:19.416028 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:19.416048 1128964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:19.416085 1128964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.39 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-075922 NodeName:default-k8s-diff-port-075922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:21:19.416297 1128964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.39
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-075922"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:19.416379 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:21:19.427340 1128964 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:19.427420 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:19.438470 1128964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0318 14:21:19.459945 1128964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:19.479728 1128964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0318 14:21:19.500079 1128964 ssh_runner.go:195] Run: grep 192.168.83.39	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:19.504746 1128964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:19.519931 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:19.654822 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:19.675414 1128964 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922 for IP: 192.168.83.39
	I0318 14:21:19.675443 1128964 certs.go:194] generating shared ca certs ...
	I0318 14:21:19.675462 1128964 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:19.675647 1128964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:19.675707 1128964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:19.675722 1128964 certs.go:256] generating profile certs ...
	I0318 14:21:19.675861 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/client.key
	I0318 14:21:19.683399 1128964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key.675162fd
	I0318 14:21:19.683522 1128964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key
	I0318 14:21:19.683667 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:19.683715 1128964 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:19.683730 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:19.683782 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:19.683870 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:19.683897 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:19.683940 1128964 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:19.684679 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:19.743065 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:19.787963 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:19.833491 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:19.865359 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 14:21:19.903294 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:21:19.932298 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:19.961860 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/default-k8s-diff-port-075922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 14:21:19.992150 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:20.020750 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:20.047780 1128964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:20.074566 1128964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:20.094524 1128964 ssh_runner.go:195] Run: openssl version
	I0318 14:21:20.101181 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:20.118970 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124628 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.124707 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:20.133462 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:21:20.150447 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:20.165864 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173488 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.173627 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:20.183147 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:20.200417 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:20.213973 1128964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219407 1128964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.219488 1128964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:20.226491 1128964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:20.240299 1128964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:20.245960 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:20.253073 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:20.260144 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:20.267546 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:20.274740 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:20.282502 1128964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:21:20.289722 1128964 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-075922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-075922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:20.289817 1128964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:20.289877 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.338941 1128964 cri.go:89] found id: ""
	I0318 14:21:20.339036 1128964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:20.350677 1128964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:20.350706 1128964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:20.350718 1128964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:20.350775 1128964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:20.362216 1128964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:20.363622 1128964 kubeconfig.go:125] found "default-k8s-diff-port-075922" server: "https://192.168.83.39:8444"
	I0318 14:21:20.366606 1128964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:20.379417 1128964 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.39
	I0318 14:21:20.379460 1128964 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:20.379481 1128964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:20.379556 1128964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:20.423139 1128964 cri.go:89] found id: ""
	I0318 14:21:20.423224 1128964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:20.444111 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:20.456698 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:20.456725 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:20.456787 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:21:20.467432 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:20.467502 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:20.478894 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:21:20.490123 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:20.490216 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:20.501744 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.514020 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:20.514084 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:20.526805 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:21:20.538374 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:20.538452 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:20.550880 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:21:20.562302 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:20.687288 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.085960 1128788 pod_ready.go:102] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:21.781260 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.781287 1128788 pod_ready.go:81] duration metric: took 3.011668835s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.781297 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789501 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.789537 1128788 pod_ready.go:81] duration metric: took 8.231402ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.789552 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797445 1128788 pod_ready.go:92] pod "kube-proxy-mtx9w" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.797483 1128788 pod_ready.go:81] duration metric: took 7.921289ms for pod "kube-proxy-mtx9w" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.797496 1128788 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804084 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:21.804120 1128788 pod_ready.go:81] duration metric: took 6.613559ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:21.804132 1128788 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:23.812751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:22.480055 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:22.480767 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:22.480805 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:22.480700 1130456 retry.go:31] will retry after 2.377253243s: waiting for machine to come up
	I0318 14:21:24.861000 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:24.861459 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:24.861513 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:24.861440 1130456 retry.go:31] will retry after 2.768860765s: waiting for machine to come up
	I0318 14:21:21.432193 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.821781 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.899411 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:21.984494 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:21.984624 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.484985 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:22.985119 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:23.009700 1128964 api_server.go:72] duration metric: took 1.025195346s to wait for apiserver process to appear ...
	I0318 14:21:23.009739 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:21:23.009764 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:23.010328 1128964 api_server.go:269] stopped: https://192.168.83.39:8444/healthz: Get "https://192.168.83.39:8444/healthz": dial tcp 192.168.83.39:8444: connect: connection refused
	I0318 14:21:23.510606 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.307173 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.307217 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.307238 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.345507 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:21:26.345551 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:21:26.510350 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:26.515684 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:26.515721 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.010509 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.015492 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:21:27.015526 1128964 api_server.go:103] status: https://192.168.83.39:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:21:27.510772 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:21:27.520209 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:21:27.527945 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:21:27.527978 1128964 api_server.go:131] duration metric: took 4.518232257s to wait for apiserver health ...
	I0318 14:21:27.527988 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:21:27.527994 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:27.529779 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:21:26.313296 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:28.811916 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:27.633200 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:27.633774 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:27.633824 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:27.633712 1130456 retry.go:31] will retry after 2.743873993s: waiting for machine to come up
	I0318 14:21:30.380835 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:30.381280 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | unable to find current IP address of domain old-k8s-version-782728 in network mk-old-k8s-version-782728
	I0318 14:21:30.381314 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | I0318 14:21:30.381213 1130456 retry.go:31] will retry after 4.377164627s: waiting for machine to come up
	I0318 14:21:27.531259 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:21:27.573198 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:21:27.619813 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:21:27.629766 1128964 system_pods.go:59] 8 kube-system pods found
	I0318 14:21:27.629805 1128964 system_pods.go:61] "coredns-5dd5756b68-dsrcd" [86ac331d-2539-4fbb-8cf8-56f58afa6f99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:21:27.629815 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [0de3bd3b-6ee2-46e2-83f7-7c637115879f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:21:27.629821 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [e1e689c8-642c-428e-bddf-43c2c1524563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:21:27.629832 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [1a200d0f-53e6-4e44-a8b0-28b9d21f763e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:21:27.629837 1128964 system_pods.go:61] "kube-proxy-wbnvd" [6bf13050-a150-4133-93e2-71ddcad443ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:21:27.629842 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [87bc17b3-75c6-4d6b-9b8f-29823398100a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:21:27.629847 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-4vrvb" [d12dc531-720c-4a7a-93af-69b9005666fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:21:27.629852 1128964 system_pods.go:61] "storage-provisioner" [856896cd-daec-4873-8f9c-c7cadeb3c16e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:21:27.629857 1128964 system_pods.go:74] duration metric: took 10.000416ms to wait for pod list to return data ...
	I0318 14:21:27.629866 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:21:27.634112 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:21:27.634147 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:21:27.634159 1128964 node_conditions.go:105] duration metric: took 4.287491ms to run NodePressure ...
	I0318 14:21:27.634190 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:27.976277 1128964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980894 1128964 kubeadm.go:733] kubelet initialised
	I0318 14:21:27.980920 1128964 kubeadm.go:734] duration metric: took 4.609836ms waiting for restarted kubelet to initialise ...
	I0318 14:21:27.980932 1128964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:21:27.986151 1128964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:29.993963 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:31.313401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:33.811753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.760820 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Found IP for machine: 192.168.50.229
	I0318 14:21:34.761353 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has current primary IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.761362 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserving static IP address...
	I0318 14:21:34.761782 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.761820 1129259 main.go:141] libmachine: (old-k8s-version-782728) Reserved static IP address: 192.168.50.229
	I0318 14:21:34.761845 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | skip adding static IP to network mk-old-k8s-version-782728 - found existing host DHCP lease matching {name: "old-k8s-version-782728", mac: "52:54:00:bb:bf:3d", ip: "192.168.50.229"}
	I0318 14:21:34.761864 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Getting to WaitForSSH function...
	I0318 14:21:34.761881 1129259 main.go:141] libmachine: (old-k8s-version-782728) Waiting for SSH to be available...
	I0318 14:21:34.764073 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764333 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.764360 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.764532 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH client type: external
	I0318 14:21:34.764572 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa (-rw-------)
	I0318 14:21:34.764613 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:34.764631 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | About to run SSH command:
	I0318 14:21:34.764647 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | exit 0
	I0318 14:21:34.896449 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:34.896855 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetConfigRaw
	I0318 14:21:34.897582 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:34.899986 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900376 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.900416 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.900800 1129259 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/config.json ...
	I0318 14:21:34.901117 1129259 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:34.901147 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:34.901437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:34.904052 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904424 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:34.904452 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:34.904606 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:34.904785 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.904945 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:34.905107 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:34.905279 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:34.905513 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:34.905531 1129259 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:35.016717 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:35.016763 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017067 1129259 buildroot.go:166] provisioning hostname "old-k8s-version-782728"
	I0318 14:21:35.017099 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.017382 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.020497 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.020890 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.020924 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.021057 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.021277 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.021590 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.021849 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.022055 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.022070 1129259 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-782728 && echo "old-k8s-version-782728" | sudo tee /etc/hostname
	I0318 14:21:35.147357 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-782728
	
	I0318 14:21:35.147390 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.150191 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150607 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.150636 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.150853 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.151114 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151347 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.151546 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.151781 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.152045 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.152072 1129259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-782728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-782728/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-782728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:35.275206 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:35.275240 1129259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:35.275285 1129259 buildroot.go:174] setting up certificates
	I0318 14:21:35.275295 1129259 provision.go:84] configureAuth start
	I0318 14:21:35.275306 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetMachineName
	I0318 14:21:35.275669 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:35.278614 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279090 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.279130 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.279354 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.282199 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282559 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.282595 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.282756 1129259 provision.go:143] copyHostCerts
	I0318 14:21:35.282849 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:35.282867 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:35.282929 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:35.283102 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:35.283114 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:35.283139 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:35.283203 1129259 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:35.283210 1129259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:35.283227 1129259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:35.283275 1129259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-782728 san=[127.0.0.1 192.168.50.229 localhost minikube old-k8s-version-782728]
	I0318 14:21:35.515186 1129259 provision.go:177] copyRemoteCerts
	I0318 14:21:35.515266 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:35.515318 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.517932 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518244 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.518297 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.518441 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.518653 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.518795 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.518970 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:35.607609 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:35.636141 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 14:21:35.664489 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:35.692201 1129259 provision.go:87] duration metric: took 416.891642ms to configureAuth
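configureAuth regenerates the docker-machine style server certificate (with the SANs listed at 14:21:35.283275) and copies the CA and server key material into /etc/docker on the guest. If the TLS connection to the machine ever misbehaves, the SANs on the installed cert can be inspected directly; a hedged sketch, assuming shell access to the guest:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'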
	I0318 14:21:35.692259 1129259 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:35.692491 1129259 config.go:182] Loaded profile config "old-k8s-version-782728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 14:21:35.692585 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.695742 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696122 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.696159 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.696325 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.696561 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696767 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.696934 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.697111 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:35.697355 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:35.697384 1129259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:35.994320 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:35.994352 1129259 machine.go:97] duration metric: took 1.093217385s to provisionDockerMachine
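The %!s(MISSING) in the logged command is an artifact of the log formatter; judging from the echoed file contents above, the intent is just to drop the insecure-registry flag into a sysconfig drop-in and restart CRI-O. A reconstruction of the equivalent command (a sketch inferred from the output, not copied from minikube's source):

	sudo mkdir -p /etc/sysconfig && printf "%s" "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio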
	I0318 14:21:35.994367 1129259 start.go:293] postStartSetup for "old-k8s-version-782728" (driver="kvm2")
	I0318 14:21:35.994383 1129259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:35.994415 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:35.994757 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:35.994799 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:35.997438 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.997814 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:35.997850 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:35.998044 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:35.998241 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:35.998437 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:35.998571 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.089357 1129259 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:36.094372 1129259 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:36.094407 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:36.094499 1129259 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:36.094617 1129259 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:36.094714 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:36.106796 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:36.135520 1129259 start.go:296] duration metric: took 141.136354ms for postStartSetup
	I0318 14:21:36.135573 1129259 fix.go:56] duration metric: took 23.385978091s for fixHost
	I0318 14:21:36.135607 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.139108 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139458 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.139491 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.139689 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.139978 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140226 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.140353 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.140528 1129259 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:36.140755 1129259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0318 14:21:36.140771 1129259 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:36.252999 1128583 start.go:364] duration metric: took 57.905644198s to acquireMachinesLock for "no-preload-188109"
	I0318 14:21:36.253054 1128583 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:21:36.253063 1128583 fix.go:54] fixHost starting: 
	I0318 14:21:36.253510 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:21:36.253545 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:21:36.271856 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0318 14:21:36.272254 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:21:36.272790 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:21:36.272822 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:21:36.273237 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:21:36.273446 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:36.273614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:21:36.275414 1128583 fix.go:112] recreateIfNeeded on no-preload-188109: state=Stopped err=<nil>
	I0318 14:21:36.275440 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	W0318 14:21:36.275623 1128583 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:21:36.277528 1128583 out.go:177] * Restarting existing kvm2 VM for "no-preload-188109" ...
	I0318 14:21:31.995770 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:34.495078 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.252848 1129259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771696.238093940
	
	I0318 14:21:36.252877 1129259 fix.go:216] guest clock: 1710771696.238093940
	I0318 14:21:36.252884 1129259 fix.go:229] Guest: 2024-03-18 14:21:36.23809394 +0000 UTC Remote: 2024-03-18 14:21:36.13557956 +0000 UTC m=+255.035410784 (delta=102.51438ms)
	I0318 14:21:36.252906 1129259 fix.go:200] guest clock delta is within tolerance: 102.51438ms
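The reported delta is just the guest clock reading minus the host-side timestamp: 1710771696.238093940 - 1710771696.13557956 ≈ 0.10251 s, i.e. the 102.51438ms shown, which is inside the tolerance, so no clock resync is needed. The arithmetic can be reproduced directly:

	echo '1710771696.238093940 - 1710771696.13557956' | bc
	# .102514380  (~102.51 ms)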
	I0318 14:21:36.252911 1129259 start.go:83] releasing machines lock for "old-k8s-version-782728", held for 23.503358875s
	I0318 14:21:36.252936 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.253200 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:36.256277 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256711 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.256741 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.256901 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257487 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257702 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .DriverName
	I0318 14:21:36.257827 1129259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:36.257887 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.258009 1129259 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:36.258034 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHHostname
	I0318 14:21:36.260840 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261336 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261358 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261456 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.261692 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.261789 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:36.261818 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:36.261892 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.261982 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHPort
	I0318 14:21:36.262127 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.262173 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHKeyPath
	I0318 14:21:36.262300 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetSSHUsername
	I0318 14:21:36.262429 1129259 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/old-k8s-version-782728/id_rsa Username:docker}
	I0318 14:21:36.345131 1129259 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:36.371649 1129259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:36.524261 1129259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:36.533020 1129259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:36.533151 1129259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:36.551817 1129259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:21:36.551860 1129259 start.go:494] detecting cgroup driver to use...
	I0318 14:21:36.551933 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:36.575948 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:36.596748 1129259 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:36.596820 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:36.614156 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:36.630681 1129259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:36.753374 1129259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:36.944402 1129259 docker.go:233] disabling docker service ...
	I0318 14:21:36.944496 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:36.966727 1129259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:36.987565 1129259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:37.121256 1129259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:21:37.264652 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:37.281737 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:37.306307 1129259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 14:21:37.306374 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.318728 1129259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:37.318818 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.330587 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.343063 1129259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:37.356170 1129259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
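Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image pinned and the cgroup handling this profile expects; roughly the following fragment (reconstructed from the commands, not captured from the guest):

	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"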
	I0318 14:21:37.369932 1129259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:37.380417 1129259 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:37.380487 1129259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:37.397409 1129259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
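The sysctl probe fails first because br_netfilter is not loaded yet, so the code falls back to modprobe and then enables IPv4 forwarding through /proc before restarting CRI-O. The same three settings can be verified by hand (sketch):

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables
	cat /proc/sys/net/ipv4/ip_forward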
	I0318 14:21:37.414745 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:37.571427 1129259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:21:37.747275 1129259 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:37.747357 1129259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:37.752838 1129259 start.go:562] Will wait 60s for crictl version
	I0318 14:21:37.752922 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:37.758286 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:37.799301 1129259 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:37.799400 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.838257 1129259 ssh_runner.go:195] Run: crio --version
	I0318 14:21:37.889692 1129259 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 14:21:35.812465 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:37.820263 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.313683 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:36.278973 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Start
	I0318 14:21:36.279160 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring networks are active...
	I0318 14:21:36.280043 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network default is active
	I0318 14:21:36.280495 1128583 main.go:141] libmachine: (no-preload-188109) Ensuring network mk-no-preload-188109 is active
	I0318 14:21:36.281014 1128583 main.go:141] libmachine: (no-preload-188109) Getting domain xml...
	I0318 14:21:36.281995 1128583 main.go:141] libmachine: (no-preload-188109) Creating domain...
	I0318 14:21:37.644409 1128583 main.go:141] libmachine: (no-preload-188109) Waiting to get IP...
	I0318 14:21:37.645406 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.645958 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.646047 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.645922 1130597 retry.go:31] will retry after 223.965782ms: waiting for machine to come up
	I0318 14:21:37.871397 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:37.871933 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:37.871971 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:37.871882 1130597 retry.go:31] will retry after 272.743353ms: waiting for machine to come up
	I0318 14:21:38.146680 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.147278 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.147309 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.147211 1130597 retry.go:31] will retry after 414.468616ms: waiting for machine to come up
	I0318 14:21:38.563199 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:38.563768 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:38.563794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:38.563718 1130597 retry.go:31] will retry after 582.588791ms: waiting for machine to come up
	I0318 14:21:39.147611 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.148410 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.148436 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.148315 1130597 retry.go:31] will retry after 686.425224ms: waiting for machine to come up
	I0318 14:21:39.836964 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:39.837647 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:39.837677 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:39.837593 1130597 retry.go:31] will retry after 878.564369ms: waiting for machine to come up
	I0318 14:21:40.717644 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:40.718346 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:40.718380 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:40.718276 1130597 retry.go:31] will retry after 1.183201382s: waiting for machine to come up
	I0318 14:21:37.891038 1129259 main.go:141] libmachine: (old-k8s-version-782728) Calling .GetIP
	I0318 14:21:37.894295 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.894865 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:bf:3d", ip: ""} in network mk-old-k8s-version-782728: {Iface:virbr1 ExpiryTime:2024-03-18 15:21:25 +0000 UTC Type:0 Mac:52:54:00:bb:bf:3d Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:old-k8s-version-782728 Clientid:01:52:54:00:bb:bf:3d}
	I0318 14:21:37.894896 1129259 main.go:141] libmachine: (old-k8s-version-782728) DBG | domain old-k8s-version-782728 has defined IP address 192.168.50.229 and MAC address 52:54:00:bb:bf:3d in network mk-old-k8s-version-782728
	I0318 14:21:37.895237 1129259 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:37.899967 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:37.916249 1129259 kubeadm.go:877] updating cluster {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:37.916384 1129259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 14:21:37.916449 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:37.974406 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:37.974492 1129259 ssh_runner.go:195] Run: which lz4
	I0318 14:21:37.979374 1129259 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:21:37.984355 1129259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:21:37.984400 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 14:21:39.978421 1129259 crio.go:444] duration metric: took 1.99908094s to copy over tarball
	I0318 14:21:39.978524 1129259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
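For a sense of scale: the preload tarball is 473237281 bytes and the copy took 1.99908094s, roughly 237 MB/s over the local SSH connection; the extraction started here completes about 3.3s later (the ssh_runner Completed line further down). A quick back-of-the-envelope check:

	echo 'scale=1; 473237281 / 1.99908094 / 1000000' | bc
	# ~236.7  (MB/s)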
	I0318 14:21:36.995480 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:39.005382 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:40.495300 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.495345 1128964 pod_ready.go:81] duration metric: took 12.509166884s for pod "coredns-5dd5756b68-dsrcd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.495358 1128964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504432 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.504467 1128964 pod_ready.go:81] duration metric: took 9.100778ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.504480 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515466 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.515506 1128964 pod_ready.go:81] duration metric: took 11.017212ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.515519 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525891 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.525929 1128964 pod_ready.go:81] duration metric: took 10.399892ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.525943 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534161 1128964 pod_ready.go:92] pod "kube-proxy-wbnvd" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:40.534196 1128964 pod_ready.go:81] duration metric: took 8.245545ms for pod "kube-proxy-wbnvd" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:40.534208 1128964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:42.314504 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:44.812532 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:41.902972 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:41.903707 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:41.903736 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:41.903670 1130597 retry.go:31] will retry after 1.282612289s: waiting for machine to come up
	I0318 14:21:43.188745 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:43.189303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:43.189332 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:43.189257 1130597 retry.go:31] will retry after 1.175485401s: waiting for machine to come up
	I0318 14:21:44.366602 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:44.367162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:44.367191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:44.367121 1130597 retry.go:31] will retry after 1.700678954s: waiting for machine to come up
	I0318 14:21:43.321091 1129259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342462355s)
	I0318 14:21:43.321144 1129259 crio.go:451] duration metric: took 3.342687518s to extract the tarball
	I0318 14:21:43.321155 1129259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:21:43.365776 1129259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:43.433785 1129259 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 14:21:43.433824 1129259 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:43.433900 1129259 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.433929 1129259 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.434017 1129259 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.434032 1129259 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 14:21:43.434046 1129259 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.434053 1129259 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.434305 1129259 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436059 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.436080 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.436108 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.436157 1129259 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.436171 1129259 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.436220 1129259 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 14:21:43.436239 1129259 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.436852 1129259 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:43.592274 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.597491 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.602837 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.613030 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.613827 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.626606 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.643937 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 14:21:43.712054 1129259 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 14:21:43.712144 1129259 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.712203 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.745459 1129259 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 14:21:43.745524 1129259 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.745578 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.804000 1129259 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 14:21:43.804069 1129259 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.804132 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.818890 1129259 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 14:21:43.818946 1129259 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.818948 1129259 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 14:21:43.818984 1129259 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.818996 1129259 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 14:21:43.819000 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819013 1129259 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.819034 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819043 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819047 1129259 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 14:21:43.819079 1129259 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 14:21:43.819111 1129259 ssh_runner.go:195] Run: which crictl
	I0318 14:21:43.819145 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 14:21:43.819113 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 14:21:43.819191 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 14:21:43.900808 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 14:21:43.900881 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 14:21:43.900956 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 14:21:43.900960 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 14:21:43.901030 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 14:21:43.901092 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 14:21:43.901124 1129259 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 14:21:43.979791 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 14:21:43.999132 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 14:21:43.999189 1129259 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 14:21:44.055513 1129259 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:44.211993 1129259 cache_images.go:92] duration metric: took 778.138355ms to LoadCachedImages
	W0318 14:21:44.212165 1129259 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0318 14:21:44.212193 1129259 kubeadm.go:928] updating node { 192.168.50.229 8443 v1.20.0 crio true true} ...
	I0318 14:21:44.212368 1129259 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-782728 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
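The drop-in above is what makes kubelet start against the CRI-O socket with the static node IP; once it has been copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below), it can be inspected on the guest with standard systemd tooling, for example:

	systemctl cat kubelet
	systemctl status kubelet --no-pager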
	I0318 14:21:44.212495 1129259 ssh_runner.go:195] Run: crio config
	I0318 14:21:44.269727 1129259 cni.go:84] Creating CNI manager for ""
	I0318 14:21:44.269766 1129259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:21:44.269785 1129259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:21:44.269814 1129259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-782728 NodeName:old-k8s-version-782728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 14:21:44.270015 1129259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-782728"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:21:44.270105 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 14:21:44.282940 1129259 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:21:44.283039 1129259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:21:44.295320 1129259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 14:21:44.315686 1129259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:21:44.335233 1129259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 14:21:44.357698 1129259 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0318 14:21:44.362264 1129259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:44.377101 1129259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:44.528190 1129259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:21:44.549708 1129259 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728 for IP: 192.168.50.229
	I0318 14:21:44.549735 1129259 certs.go:194] generating shared ca certs ...
	I0318 14:21:44.549763 1129259 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:44.549989 1129259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:21:44.550058 1129259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:21:44.550074 1129259 certs.go:256] generating profile certs ...
	I0318 14:21:44.550213 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/client.key
	I0318 14:21:44.550297 1129259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key.07e4f612
	I0318 14:21:44.550356 1129259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key
	I0318 14:21:44.550551 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:21:44.550592 1129259 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:21:44.550606 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:21:44.550645 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:21:44.550677 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:21:44.550723 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:21:44.550778 1129259 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:44.551493 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:21:44.612076 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:21:44.644841 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:21:44.677687 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:21:44.719459 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 14:21:44.767865 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 14:21:44.816764 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:21:44.860167 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/old-k8s-version-782728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:21:44.891216 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:21:44.927632 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:21:44.965589 1129259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:21:45.002269 1129259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:21:45.025347 1129259 ssh_runner.go:195] Run: openssl version
	I0318 14:21:45.032361 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:21:45.046783 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052835 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.052942 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:21:45.060025 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:21:45.073939 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:21:45.087380 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092866 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.092945 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:21:45.099328 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:21:45.112233 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:21:45.126449 1129259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132566 1129259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.132667 1129259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:21:45.139307 1129259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
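The openssl runs above compute the subject-name hash of each CA bundle and symlink it into /etc/ssl/certs as "<hash>.0" (for example b5213941.0 for minikubeCA.pem) so OpenSSL can find the certificate by hash at verification time. A rough Go sketch of that hash-and-symlink step follows, assuming openssl is on PATH; the function name and paths in main are placeholders, not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash asks openssl for the subject-name hash of a CA certificate
// and symlinks it into certsDir as "<hash>.0", like the ln -fs calls above.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder paths; the log links /usr/share/ca-certificates/*.pem into /etc/ssl/certs.
	if err := linkCertByHash("minikubeCA.pem", "."); err != nil {
		fmt.Println("error:", err)
	}
}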
	I0318 14:21:45.153117 1129259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:21:45.158588 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:21:45.166096 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:21:45.173537 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:21:45.181337 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:21:45.189126 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:21:45.197163 1129259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
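Each "openssl x509 -checkend 86400" above asks whether the named certificate will still be valid 24 hours from now; a non-zero exit would force regeneration. An equivalent check can be done directly in Go with crypto/x509, as in this sketch; the helper name and the sample path in main are illustrative, assuming a PEM-encoded certificate file.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "-checkend 86400" answers in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}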
	I0318 14:21:45.206171 1129259 kubeadm.go:391] StartCluster: {Name:old-k8s-version-782728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-782728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:21:45.206295 1129259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:21:45.206370 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.247013 1129259 cri.go:89] found id: ""
	I0318 14:21:45.247119 1129259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:21:45.261917 1129259 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:21:45.261947 1129259 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:21:45.261955 1129259 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:21:45.262015 1129259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:21:45.276154 1129259 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:21:45.277263 1129259 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-782728" does not appear in /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:21:45.277937 1129259 kubeconfig.go:62] /home/jenkins/minikube-integration/18427-1067917/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-782728" cluster setting kubeconfig missing "old-k8s-version-782728" context setting]
	I0318 14:21:45.278862 1129259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:21:45.280825 1129259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:21:45.295159 1129259 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.229
	I0318 14:21:45.295211 1129259 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:21:45.295255 1129259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:21:45.295321 1129259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:21:45.343156 1129259 cri.go:89] found id: ""
	I0318 14:21:45.343242 1129259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:21:45.361812 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:21:45.376218 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:21:45.376250 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:21:45.376314 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:21:45.386913 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:21:45.387056 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:21:45.398244 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:21:45.409397 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:21:45.409476 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:21:45.421057 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.432124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:21:45.432193 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:21:45.443793 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:21:45.454348 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:21:45.454463 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:21:45.465286 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
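The stale-config sweep above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not reference it (here none of the files exist, so the rm -f calls are no-ops), then the freshly generated kubeadm.yaml is moved into place. The following is a loose Go sketch of that check-then-remove pattern; the function name is mine and the endpoint is the one shown in the log, not a general rule.

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfig deletes path unless it already points at endpoint,
// mirroring the grep-then-rm sequence above. A missing file is treated the
// same as a stale one (rm -f semantics).
func removeStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // up to date, keep it
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := removeStaleKubeconfig(f, endpoint); err != nil {
			fmt.Println("error:", err)
		}
	}
}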
	I0318 14:21:45.477199 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:45.613588 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:41.690971 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:21:41.691009 1128964 pod_ready.go:81] duration metric: took 1.156786821s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:41.691020 1128964 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	I0318 14:21:44.189110 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.201644 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.813954 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:48.817402 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:46.069196 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:46.069747 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:46.069797 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:46.069687 1130597 retry.go:31] will retry after 2.354521412s: waiting for machine to come up
	I0318 14:21:48.425714 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:48.426186 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:48.426219 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:48.426147 1130597 retry.go:31] will retry after 2.74319235s: waiting for machine to come up
	I0318 14:21:46.567767 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.838421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:21:46.993039 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
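Rather than a full "kubeadm init", the restart path above re-runs the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml. A minimal Go sketch of that sequence via os/exec follows; the binary and config paths are the ones printed in the log, and running it outside that guest would obviously fail.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase order taken from the log above.
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("phase %v failed: %v\n", p, err)
			return
		}
	}
}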
	I0318 14:21:47.096766 1129259 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:21:47.096883 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:47.596963 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.097569 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:48.597879 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.097195 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:49.597924 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.097885 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:50.597926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:51.096984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
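After the control-plane phases, the log above shows minikube polling roughly every half second for a kube-apiserver process with pgrep. A simple poll-with-timeout sketch of the same idea, assuming pgrep is on PATH; the function name and the two-minute timeout are illustrative choices, not values from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching kube-apiserver process
// appears or the timeout expires, like the repeated
// "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}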
	I0318 14:21:48.699275 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:50.699690 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.311999 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:53.811066 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:51.173264 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.173844 1128583 main.go:141] libmachine: (no-preload-188109) DBG | unable to find current IP address of domain no-preload-188109 in network mk-no-preload-188109
	I0318 14:21:51.173880 1128583 main.go:141] libmachine: (no-preload-188109) DBG | I0318 14:21:51.173784 1130597 retry.go:31] will retry after 4.489599719s: waiting for machine to come up
	I0318 14:21:55.665080 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665639 1128583 main.go:141] libmachine: (no-preload-188109) Found IP for machine: 192.168.61.40
	I0318 14:21:55.665675 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has current primary IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.665686 1128583 main.go:141] libmachine: (no-preload-188109) Reserving static IP address...
	I0318 14:21:55.666111 1128583 main.go:141] libmachine: (no-preload-188109) Reserved static IP address: 192.168.61.40
	I0318 14:21:55.666149 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.666164 1128583 main.go:141] libmachine: (no-preload-188109) Waiting for SSH to be available...
	I0318 14:21:55.666191 1128583 main.go:141] libmachine: (no-preload-188109) DBG | skip adding static IP to network mk-no-preload-188109 - found existing host DHCP lease matching {name: "no-preload-188109", mac: "52:54:00:21:62:25", ip: "192.168.61.40"}
	I0318 14:21:55.666205 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Getting to WaitForSSH function...
	I0318 14:21:55.668473 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668792 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.668837 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.668947 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH client type: external
	I0318 14:21:55.668989 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Using SSH private key: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa (-rw-------)
	I0318 14:21:55.669020 1128583 main.go:141] libmachine: (no-preload-188109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:21:55.669043 1128583 main.go:141] libmachine: (no-preload-188109) DBG | About to run SSH command:
	I0318 14:21:55.669095 1128583 main.go:141] libmachine: (no-preload-188109) DBG | exit 0
	I0318 14:21:55.796228 1128583 main.go:141] libmachine: (no-preload-188109) DBG | SSH cmd err, output: <nil>: 
	I0318 14:21:55.796668 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetConfigRaw
	I0318 14:21:55.797378 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:55.800241 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.800716 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.800771 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.801150 1128583 profile.go:142] Saving config to /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/config.json ...
	I0318 14:21:55.801416 1128583 machine.go:94] provisionDockerMachine start ...
	I0318 14:21:55.801441 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:55.801690 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.804667 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:51.597867 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.097894 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.597872 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.096949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:53.597262 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.097637 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:54.597078 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.097246 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:55.597940 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:56.097312 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:52.700698 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.198658 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:55.805029 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.805269 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.806759 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.806983 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807220 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.807421 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.807623 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.807952 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.807982 1128583 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:21:55.920939 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:21:55.920993 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921259 1128583 buildroot.go:166] provisioning hostname "no-preload-188109"
	I0318 14:21:55.921292 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:55.921510 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:55.924430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.924921 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:55.924962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:55.925153 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:55.925431 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925614 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:55.925792 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:55.926029 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:55.926301 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:55.926320 1128583 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-188109 && echo "no-preload-188109" | sudo tee /etc/hostname
	I0318 14:21:56.051873 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-188109
	
	I0318 14:21:56.051915 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.055015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055387 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.055422 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.055659 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.055887 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056058 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.056190 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.056318 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.056508 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.056525 1128583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-188109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-188109/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-188109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:21:56.178366 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:21:56.178401 1128583 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18427-1067917/.minikube CaCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18427-1067917/.minikube}
	I0318 14:21:56.178443 1128583 buildroot.go:174] setting up certificates
	I0318 14:21:56.178454 1128583 provision.go:84] configureAuth start
	I0318 14:21:56.178465 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetMachineName
	I0318 14:21:56.178859 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:56.181995 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182430 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.182457 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.182724 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.185337 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185623 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.185649 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.185880 1128583 provision.go:143] copyHostCerts
	I0318 14:21:56.185968 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem, removing ...
	I0318 14:21:56.185983 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem
	I0318 14:21:56.186073 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/key.pem (1679 bytes)
	I0318 14:21:56.186249 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem, removing ...
	I0318 14:21:56.186264 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem
	I0318 14:21:56.186296 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.pem (1078 bytes)
	I0318 14:21:56.186392 1128583 exec_runner.go:144] found /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem, removing ...
	I0318 14:21:56.186406 1128583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem
	I0318 14:21:56.186432 1128583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18427-1067917/.minikube/cert.pem (1123 bytes)
	I0318 14:21:56.186511 1128583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem org=jenkins.no-preload-188109 san=[127.0.0.1 192.168.61.40 localhost minikube no-preload-188109]
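The provision step above generates a Docker-machine server certificate whose SANs cover 127.0.0.1, the guest IP, localhost, minikube, and the machine name, signed by the machine CA. The sketch below only illustrates what a certificate with those SANs looks like when built with crypto/x509; it self-signs instead of using the CA to stay short, so it is not the provisioning code itself.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs as listed in the provision line above; self-signed for brevity.
	dnsNames := []string{"localhost", "minikube", "no-preload-188109"}
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.40")}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-188109"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Println("wrote server.pem with SANs", dnsNames, ips)
}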
	I0318 14:21:56.332196 1128583 provision.go:177] copyRemoteCerts
	I0318 14:21:56.332267 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:21:56.332295 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.335310 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335604 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.335639 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.335787 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.336002 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.336170 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.336310 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.427529 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:21:56.459132 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:21:56.488690 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:21:56.516043 1128583 provision.go:87] duration metric: took 337.568576ms to configureAuth
	I0318 14:21:56.516088 1128583 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:21:56.516309 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:21:56.516457 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.519576 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.519998 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.520059 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.520237 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.520460 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520677 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.520876 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.521065 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.521290 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.521307 1128583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:21:56.831034 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:21:56.831076 1128583 machine.go:97] duration metric: took 1.029643209s to provisionDockerMachine
	I0318 14:21:56.831092 1128583 start.go:293] postStartSetup for "no-preload-188109" (driver="kvm2")
	I0318 14:21:56.831107 1128583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:21:56.831126 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:56.831549 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:21:56.831611 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.834520 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.834962 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.834992 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.835234 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.835415 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.835582 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.835743 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:56.927694 1128583 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:21:56.932973 1128583 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:21:56.933002 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/addons for local assets ...
	I0318 14:21:56.933088 1128583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18427-1067917/.minikube/files for local assets ...
	I0318 14:21:56.933200 1128583 filesync.go:149] local asset: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem -> 10752082.pem in /etc/ssl/certs
	I0318 14:21:56.933345 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:21:56.943594 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:21:56.971483 1128583 start.go:296] duration metric: took 140.368525ms for postStartSetup
	I0318 14:21:56.971564 1128583 fix.go:56] duration metric: took 20.718501273s for fixHost
	I0318 14:21:56.971618 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:56.974721 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975185 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:56.975250 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:56.975409 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:56.975679 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.975885 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:56.976049 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:56.976242 1128583 main.go:141] libmachine: Using SSH client type: native
	I0318 14:21:56.976438 1128583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.40 22 <nil> <nil>}
	I0318 14:21:56.976453 1128583 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:21:57.089795 1128583 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771717.066528661
	
	I0318 14:21:57.089823 1128583 fix.go:216] guest clock: 1710771717.066528661
	I0318 14:21:57.089834 1128583 fix.go:229] Guest: 2024-03-18 14:21:57.066528661 +0000 UTC Remote: 2024-03-18 14:21:56.971568576 +0000 UTC m=+361.214853207 (delta=94.960085ms)
	I0318 14:21:57.089865 1128583 fix.go:200] guest clock delta is within tolerance: 94.960085ms
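The "%!s(MISSING)" in the logged date command is just a formatting quirk of the logger; judging by the output, the guest clock is read as seconds.nanoseconds and compared with the host clock, and the delta is accepted if it is within tolerance. A small Go sketch of that comparison follows, using the value from the log; the two-second tolerance is a placeholder, not the value minikube uses.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta turns a "seconds.nanoseconds" clock reading into a time.Time and
// returns its absolute distance from the local clock, as in fix.go above.
func clockDelta(guest string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	d := local.Sub(time.Unix(sec, nsec))
	return time.Duration(math.Abs(float64(d))), nil
}

func main() {
	// Value taken from the log line "guest clock: 1710771717.066528661".
	delta, err := clockDelta("1710771717.066528661", time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Println("delta:", delta, "within tolerance:", delta < 2*time.Second)
}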
	I0318 14:21:57.089873 1128583 start.go:83] releasing machines lock for "no-preload-188109", held for 20.836840869s
	I0318 14:21:57.089898 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.090297 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:57.094015 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094517 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.094563 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.094920 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095607 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095844 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:21:57.095978 1128583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:21:57.096034 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.096182 1128583 ssh_runner.go:195] Run: cat /version.json
	I0318 14:21:57.096221 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:21:57.099303 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099329 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099754 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099794 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:57.099854 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.099869 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:57.100103 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100118 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:21:57.100337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100339 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:21:57.100568 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100578 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:21:57.100766 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.100781 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:21:57.203060 1128583 ssh_runner.go:195] Run: systemctl --version
	I0318 14:21:57.209943 1128583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:21:57.368686 1128583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:21:57.376289 1128583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:21:57.376375 1128583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:21:57.394365 1128583 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
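The find/mv run above renames any bridge or podman CNI config under /etc/cni/net.d to a ".mk_disabled" suffix so only minikube's chosen CNI stays active (here 87-podman-bridge.conflist is disabled). A rough Go equivalent of that rename pass, for illustration; the function name is mine and the directory is the one from the log.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI configs in dir to
// "<name>.mk_disabled", mirroring the find -exec mv run above.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, name)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}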
	I0318 14:21:57.394405 1128583 start.go:494] detecting cgroup driver to use...
	I0318 14:21:57.394488 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:21:57.412172 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:21:57.428895 1128583 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:21:57.428988 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:21:57.445064 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:21:57.461255 1128583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:21:57.596381 1128583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:21:57.774782 1128583 docker.go:233] disabling docker service ...
	I0318 14:21:57.774890 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:21:57.791820 1128583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:21:57.807412 1128583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:21:57.961890 1128583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
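To leave CRI-O as the only runtime, the log above stops, disables, and masks the cri-docker and docker units in turn, tolerating failures from units that are not running. A best-effort sketch of the same systemctl sequence via os/exec; it assumes root and systemd, and simply reports (rather than aborts on) individual failures, as the log does.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same unit operations as the log; "stop" errors are expected when a unit is absent.
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
}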
	I0318 14:21:58.118122 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:21:58.133994 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:21:58.155336 1128583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:21:58.155429 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.167537 1128583 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:21:58.167642 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.180814 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.193997 1128583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:21:58.206817 1128583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:21:58.220843 1128583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:21:58.232012 1128583 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:21:58.232073 1128583 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:21:58.246610 1128583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:21:58.260393 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:21:58.416723 1128583 ssh_runner.go:195] Run: sudo systemctl restart crio
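Before restarting CRI-O, the sed commands above point /etc/crio/crio.conf.d/02-crio.conf at the desired pause image and the cgroupfs cgroup manager (and reset conmon_cgroup), and /etc/crictl.yaml is written to target the crio socket. The sketch below shows the same kind of in-place line rewrite in Go; the helper name is mine, and the file path and values are taken from the log.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteLine replaces every line matching pattern in path with repl,
// the same edit the sed commands above perform on 02-crio.conf.
func rewriteLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)" + pattern)
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []struct{ pattern, repl string }{
		{`^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`},
		{`^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
	}
	for _, e := range edits {
		if err := rewriteLine(conf, e.pattern, e.repl); err != nil {
			fmt.Println("error:", err)
		}
	}
	// The log also deletes any conmon_cgroup line and re-adds `conmon_cgroup = "pod"`
	// after cgroup_manager; omitted here for brevity.
}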
	I0318 14:21:58.588776 1128583 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:21:58.588864 1128583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:21:58.594689 1128583 start.go:562] Will wait 60s for crictl version
	I0318 14:21:58.594787 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:58.599287 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:21:58.634954 1128583 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:21:58.635059 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.667031 1128583 ssh_runner.go:195] Run: crio --version
	I0318 14:21:58.703316 1128583 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 14:21:55.812079 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:57.813027 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.310988 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:21:58.704763 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetIP
	I0318 14:21:58.708030 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708495 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:21:58.708527 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:21:58.708738 1128583 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 14:21:58.713408 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:21:58.726934 1128583 kubeadm.go:877] updating cluster {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:21:58.727067 1128583 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:21:58.727105 1128583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:21:58.764875 1128583 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 14:21:58.764904 1128583 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 14:21:58.764976 1128583 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.765019 1128583 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.765091 1128583 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.765117 1128583 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.765142 1128583 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.765158 1128583 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.765125 1128583 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.765098 1128583 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766495 1128583 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 14:21:58.766589 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.766592 1128583 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.766702 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.766768 1128583 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.766924 1128583 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:58.766492 1128583 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.919274 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 14:21:58.934955 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:58.945887 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:58.954907 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:58.961334 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:58.976485 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:58.991515 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.100572 1128583 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 14:21:59.100624 1128583 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.100684 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.125681 1128583 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 14:21:59.125740 1128583 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.125799 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.138461 1128583 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 14:21:59.138521 1128583 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.138579 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149655 1128583 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 14:21:59.149697 1128583 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.149734 1128583 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.149763 1128583 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149803 1128583 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.149831 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 14:21:59.149839 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149790 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 14:21:59.149789 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:21:59.149875 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 14:21:59.231815 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 14:21:59.231851 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 14:21:59.231959 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:21:59.232052 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.232060 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 14:21:59.232064 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 14:21:59.231921 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.232148 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:21:59.317997 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 14:21:59.318029 1128583 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318083 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 14:21:59.318116 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318158 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318213 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:21:59.318240 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.318246 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 14:21:59.318252 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:21:59.318281 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 14:21:59.318315 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:21:59.364549 1128583 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:21:56.597953 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.098324 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.598002 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.097907 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:58.597192 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.097990 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:59.597523 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.097862 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:00.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:01.097925 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:21:57.703771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:00.200048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:02.313802 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.812944 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:03.246360 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.928017963s)
	I0318 14:22:03.246414 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246364 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.928251379s)
	I0318 14:22:03.246429 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 14:22:03.246439 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.92820974s)
	I0318 14:22:03.246454 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246468 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246415 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.928141711s)
	I0318 14:22:03.246512 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 14:22:03.246515 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 14:22:03.246516 1128583 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.88192635s)
	I0318 14:22:03.246587 1128583 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 14:22:03.246641 1128583 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:03.246704 1128583 ssh_runner.go:195] Run: which crictl
	I0318 14:22:01.597799 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.097198 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.597105 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.097996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:03.597914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.097805 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:04.597949 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.097415 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:05.597222 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:06.096954 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:02.203222 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:04.699887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.813730 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.311491 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:06.317600 1128583 ssh_runner.go:235] Completed: which crictl: (3.070863461s)
	I0318 14:22:06.317700 1128583 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:22:06.317775 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.071235517s)
	I0318 14:22:06.317805 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 14:22:06.317837 1128583 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.317907 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 14:22:06.370328 1128583 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 14:22:06.370435 1128583 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.243401402s)
	I0318 14:22:08.613903 1128583 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 14:22:08.613860 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.295918452s)
	I0318 14:22:08.613917 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 14:22:08.613941 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:08.613994 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 14:22:06.597785 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.097171 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.597738 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.097476 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:08.596984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.097503 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:09.597464 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.096998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:10.597822 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.097597 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:07.199978 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:09.200394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.312752 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:13.812826 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:11.076840 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462814214s)
	I0318 14:22:11.076881 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 14:22:11.076917 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:11.076968 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 14:22:13.332851 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.25584847s)
	I0318 14:22:13.332896 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 14:22:13.332932 1128583 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:13.333002 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 14:22:14.705785 1128583 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.372744893s)
	I0318 14:22:14.705843 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 14:22:14.705881 1128583 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:14.705945 1128583 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 14:22:15.467380 1128583 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 14:22:15.467432 1128583 cache_images.go:123] Successfully loaded all cached images
	I0318 14:22:15.467439 1128583 cache_images.go:92] duration metric: took 16.702522125s to LoadCachedImages
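
For context, the sequence above (inspect each image in the container runtime, remove any stale copy, and fall back to loading its tarball from the local cache with podman) follows a simple check-then-load pattern. The sketch below is illustrative only, assuming the cache naming scheme visible in the log (image base name with ':' replaced by '_'); it is not minikube's actual implementation, which runs these commands remotely over SSH.

package main

import (
	"fmt"
	"os/exec"
	"path"
	"path/filepath"
	"strings"
)

// imagePresent mirrors the log's check: sudo podman image inspect --format {{.Id}} IMAGE
func imagePresent(image string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
}

// loadFromCache mirrors: sudo podman load -i /var/lib/minikube/images/NAME
// The NAME derivation (base name, ':' -> '_') is inferred from the paths in the log.
func loadFromCache(cacheDir, image string) error {
	name := strings.ReplaceAll(path.Base(image), ":", "_")
	return exec.Command("sudo", "podman", "load", "-i", filepath.Join(cacheDir, name)).Run()
}

func main() {
	images := []string{
		"registry.k8s.io/etcd:3.5.10-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	}
	for _, img := range images {
		if imagePresent(img) {
			continue // already in the container runtime, nothing to transfer
		}
		if err := loadFromCache("/var/lib/minikube/images", img); err != nil {
			fmt.Println("load failed:", img, err)
		}
	}
}
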
	I0318 14:22:15.467456 1128583 kubeadm.go:928] updating node { 192.168.61.40 8443 v1.29.0-rc.2 crio true true} ...
	I0318 14:22:15.467619 1128583 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-188109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:22:15.467790 1128583 ssh_runner.go:195] Run: crio config
	I0318 14:22:15.520678 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:15.520705 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:15.520718 1128583 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:22:15.520740 1128583 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.40 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-188109 NodeName:no-preload-188109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:22:15.520893 1128583 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.40
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-188109"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.40
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.40"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:22:15.520965 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 14:22:15.534187 1128583 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:22:15.534260 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:22:15.546509 1128583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 14:22:15.567029 1128583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 14:22:15.586866 1128583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 14:22:15.609161 1128583 ssh_runner.go:195] Run: grep 192.168.61.40	control-plane.minikube.internal$ /etc/hosts
	I0318 14:22:15.614800 1128583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.40	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
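
The one-liner above keeps /etc/hosts idempotent: any stale line for the name is filtered out before the fresh IP mapping is appended. Below is a minimal in-process sketch of the same pattern; it is illustrative only, since minikube performs the edit remotely via the bash command shown in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing "<ip>\t<name>" line and appends a fresh one,
// mirroring the grep -v / echo / cp pipeline in the log above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values taken from the log above; writing /etc/hosts requires root.
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.40", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
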
	I0318 14:22:15.630088 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:22:15.754729 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:22:15.774062 1128583 certs.go:68] Setting up /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109 for IP: 192.168.61.40
	I0318 14:22:15.774093 1128583 certs.go:194] generating shared ca certs ...
	I0318 14:22:15.774114 1128583 certs.go:226] acquiring lock for ca certs: {Name:mkc7781bd693a905b86d51c380d52282361680ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:22:15.774374 1128583 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key
	I0318 14:22:15.774434 1128583 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key
	I0318 14:22:15.774448 1128583 certs.go:256] generating profile certs ...
	I0318 14:22:15.774537 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/client.key
	I0318 14:22:15.774607 1128583 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key.8d4024a9
	I0318 14:22:15.774652 1128583 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key
	I0318 14:22:15.774833 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem (1338 bytes)
	W0318 14:22:15.774871 1128583 certs.go:480] ignoring /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208_empty.pem, impossibly tiny 0 bytes
	I0318 14:22:15.774882 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 14:22:15.774926 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:22:15.774972 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:22:15.775031 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/key.pem (1679 bytes)
	I0318 14:22:15.775106 1128583 certs.go:484] found cert: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem (1708 bytes)
	I0318 14:22:15.775902 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:22:11.597959 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.097914 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:12.597046 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.097863 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:13.597617 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.097268 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:14.597088 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.097142 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:15.597902 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:16.098091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:11.698561 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:14.199200 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.200026 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:16.312392 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:18.812463 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:15.821418 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:22:15.874044 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:22:15.910814 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:22:15.965889 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 14:22:16.001003 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:22:16.030033 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:22:16.060519 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/no-preload-188109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:22:16.089952 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/certs/1075208.pem --> /usr/share/ca-certificates/1075208.pem (1338 bytes)
	I0318 14:22:16.119397 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/ssl/certs/10752082.pem --> /usr/share/ca-certificates/10752082.pem (1708 bytes)
	I0318 14:22:16.150036 1128583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:22:16.179489 1128583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:22:16.201823 1128583 ssh_runner.go:195] Run: openssl version
	I0318 14:22:16.208496 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1075208.pem && ln -fs /usr/share/ca-certificates/1075208.pem /etc/ssl/certs/1075208.pem"
	I0318 14:22:16.222723 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228161 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:54 /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.228239 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1075208.pem
	I0318 14:22:16.234994 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1075208.pem /etc/ssl/certs/51391683.0"
	I0318 14:22:16.248672 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10752082.pem && ln -fs /usr/share/ca-certificates/10752082.pem /etc/ssl/certs/10752082.pem"
	I0318 14:22:16.262626 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268255 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:54 /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.268361 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10752082.pem
	I0318 14:22:16.274868 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10752082.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:22:16.287251 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:22:16.299690 1128583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304633 1128583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.304718 1128583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:22:16.311230 1128583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
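
Each extra CA above is installed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), which is how TLS stacks locate it. The sketch below shows that step under the assumption that openssl is on PATH; it is illustrative, not minikube's code, and skips error handling for brevity.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash of a PEM certificate and
// symlinks it as <certsDir>/<hash>.0, like the openssl/ln commands above.
func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, as `ln -fs` would
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
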
	I0318 14:22:16.325483 1128583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:22:16.331012 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:22:16.338731 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:22:16.346289 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:22:16.353403 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:22:16.359967 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:22:16.367151 1128583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:22:16.373719 1128583 kubeadm.go:391] StartCluster: {Name:no-preload-188109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-188109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:22:16.373823 1128583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:22:16.373921 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.417874 1128583 cri.go:89] found id: ""
	I0318 14:22:16.417957 1128583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:22:16.431026 1128583 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:22:16.431057 1128583 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:22:16.431065 1128583 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:22:16.431125 1128583 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:22:16.445445 1128583 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:22:16.446576 1128583 kubeconfig.go:125] found "no-preload-188109" server: "https://192.168.61.40:8443"
	I0318 14:22:16.449104 1128583 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:22:16.461001 1128583 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.40
	I0318 14:22:16.461042 1128583 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:22:16.461056 1128583 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:22:16.461104 1128583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:22:16.502356 1128583 cri.go:89] found id: ""
	I0318 14:22:16.502437 1128583 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:22:16.525636 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:22:16.538600 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:22:16.538626 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:22:16.538677 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:22:16.550720 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:22:16.550803 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:22:16.562585 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:22:16.573439 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:22:16.573502 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:22:16.585548 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.596619 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:22:16.596706 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:22:16.608458 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:22:16.619498 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:22:16.619587 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
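
The restart path above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that lacks it, so kubeadm will regenerate them in the next phase. The compact sketch below shows that check; it is illustrative only, while the real flow shells out to grep and rm over SSH as the log records.

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleConfigs keeps a config file only if it references the expected
// control-plane endpoint; otherwise it is removed so kubeadm can recreate it.
func pruneStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println("remove failed:", f, rmErr)
			}
		}
	}
}

func main() {
	pruneStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
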
	I0318 14:22:16.631359 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:22:16.643420 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:16.765437 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:17.862932 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.097434993s)
	I0318 14:22:17.862980 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.097197 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:18.168390 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
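
Rather than a full kubeadm init, the restart runs individual init phases against the generated config, as the commands above show. The loop below sketches the same sequence; the binary path and phase list are copied from the log, while direct local invocation (and the omission of the sudo/PATH handling visible above) is a simplification for illustration only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as in the log: certs, kubeconfigs, kubelet, control plane, etcd.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", cfg)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}
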
	I0318 14:22:18.295118 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:22:18.295225 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.795897 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.295431 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.335088 1128583 api_server.go:72] duration metric: took 1.039967082s to wait for apiserver process to appear ...
	I0318 14:22:19.335128 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:22:19.335163 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:19.335912 1128583 api_server.go:269] stopped: https://192.168.61.40:8443/healthz: Get "https://192.168.61.40:8443/healthz": dial tcp 192.168.61.40:8443: connect: connection refused
	I0318 14:22:19.836266 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:16.597253 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.097759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:17.597764 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.097196 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.597181 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.097798 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:19.598008 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.097899 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:20.597717 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:21.097339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:18.699537 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:21.199910 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:22.338349 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.338383 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.338402 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.351154 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:22:22.351190 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:22:22.835446 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:22.841044 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:22.841092 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.335665 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.347092 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.347126 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:23.835731 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:23.840517 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:22:23.840559 1128583 api_server.go:103] status: https://192.168.61.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:22:24.336151 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:22:24.340981 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:22:24.354524 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:22:24.354560 1128583 api_server.go:131] duration metric: took 5.019424083s to wait for apiserver health ...
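The repeated 500 responses above are the normal post-restart health wait: the apiserver answers /healthz while [-]poststarthook/rbac/bootstrap-roles is still failing, and minikube keeps polling every ~500ms until the hook clears and a 200 "ok" comes back. A minimal Go sketch of that polling pattern follows; it is illustrative only, not minikube's api_server.go, and the address, timeout, and skipped TLS verification are assumptions taken from the log.

// healthz_poll.go: poll an apiserver /healthz endpoint until it returns 200,
// printing the body on non-200 responses, as in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		// No cluster CA available in this sketch, so verification is skipped.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.40:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}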
	I0318 14:22:24.354570 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:22:24.354576 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:22:24.356602 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:22:20.818751 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:23.312003 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:24.358089 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:22:24.375159 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
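The two lines above are the bridge CNI setup recommended for the "kvm2" driver with the "crio" runtime: create /etc/cni/net.d and copy a small conflist into it. The sketch below writes a representative bridge conflist in Go; the JSON is an assumption for illustration and is not the exact 457-byte file minikube transfers.

// cni_bridge.go: write a representative bridge CNI config of the kind
// minikube places at /etc/cni/net.d/1-k8s.conflist above.
package main

import (
	"fmt"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Written to the working directory here; on the node it lands in
	// /etc/cni/net.d (root-owned, hence the sudo mkdir and scp above).
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}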
	I0318 14:22:24.426409 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:22:24.452289 1128583 system_pods.go:59] 8 kube-system pods found
	I0318 14:22:24.452326 1128583 system_pods.go:61] "coredns-76f75df574-cksb5" [9cd14e15-7b0f-4978-b667-cba1a54db074] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:22:24.452333 1128583 system_pods.go:61] "etcd-no-preload-188109" [fa7d3ae7-2ac1-4275-8739-686c2e3b7569] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:22:24.452345 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [135ee544-ca83-41ab-9cb2-070587eb3b77] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:22:24.452351 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [fd91846b-6210-4cab-ae0f-5e942b4f596e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:22:24.452361 1128583 system_pods.go:61] "kube-proxy-k5kcr" [a1649d3a-9063-49c3-a8a5-04879eee108b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 14:22:24.452367 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [5bbb4165-ca8f-4807-ad01-bb35c56b6aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:22:24.452375 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-6pn6n" [004af8d8-fa8c-475c-9604-ed98ccceb3a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:22:24.452390 1128583 system_pods.go:61] "storage-provisioner" [45cae6ca-e3ad-4f7e-9d10-96e091160e4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:22:24.452404 1128583 system_pods.go:74] duration metric: took 25.960889ms to wait for pod list to return data ...
	I0318 14:22:24.452417 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:22:24.456337 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:22:24.456367 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:22:24.456404 1128583 node_conditions.go:105] duration metric: took 3.980296ms to run NodePressure ...
	I0318 14:22:24.456424 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:22:24.738808 1128583 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743864 1128583 kubeadm.go:733] kubelet initialised
	I0318 14:22:24.743893 1128583 kubeadm.go:734] duration metric: took 5.054661ms waiting for restarted kubelet to initialise ...
	I0318 14:22:24.743905 1128583 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:22:24.749832 1128583 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
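The pod_ready.go lines here and throughout the rest of this log amount to polling each system-critical pod and checking its Ready condition until it is True or the 4m0s budget runs out. A client-go sketch of that check follows; the kubeconfig path, pod name, and poll interval are assumptions, and this is not minikube's implementation.

// pod_ready_sketch.go: poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path assumed
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-76f75df574-cksb5", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}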
	I0318 14:22:21.597443 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.097053 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:22.597084 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.097025 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.597649 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.097040 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:24.597607 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.097886 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:25.597114 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:26.097643 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:23.700193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.198261 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:25.810553 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:27.811576 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.310813 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.757033 1128583 pod_ready.go:102] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:28.757522 1128583 pod_ready.go:92] pod "coredns-76f75df574-cksb5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:28.757562 1128583 pod_ready.go:81] duration metric: took 4.007696709s for pod "coredns-76f75df574-cksb5" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:28.757576 1128583 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:30.767877 1128583 pod_ready.go:102] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:26.597493 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.097772 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:27.597033 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.097997 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.597751 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.097139 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:29.596987 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.097453 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:30.598006 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:31.097066 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:28.199688 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:30.199994 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:32.311356 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.311807 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.265717 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:31.265745 1128583 pod_ready.go:81] duration metric: took 2.508162139s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:31.265755 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:33.273718 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:35.275477 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:31.597688 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.097887 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.597759 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.097858 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:33.597065 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.097024 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:34.597018 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.097472 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:35.597226 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.097920 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:32.200137 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:34.698589 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:36.812617 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.312289 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:37.774164 1128583 pod_ready.go:102] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.273935 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.273990 1128583 pod_ready.go:81] duration metric: took 8.008204942s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.274005 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280284 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.280313 1128583 pod_ready.go:81] duration metric: took 6.300519ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.280324 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286027 1128583 pod_ready.go:92] pod "kube-proxy-k5kcr" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.286052 1128583 pod_ready.go:81] duration metric: took 5.721757ms for pod "kube-proxy-k5kcr" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.286061 1128583 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292404 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:22:39.292450 1128583 pod_ready.go:81] duration metric: took 6.381121ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:39.292462 1128583 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
	I0318 14:22:36.597756 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.097176 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:37.597091 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.097280 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:38.597026 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.097810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:39.597789 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.097897 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:40.597313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:41.096966 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:36.699760 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:39.198691 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.199259 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.812494 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:44.312890 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.300167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:43.803022 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:41.597849 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.097957 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:42.597473 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.097624 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.597810 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.098012 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:44.597317 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.097384 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:45.597816 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:46.097353 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:43.199771 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:45.698884 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.811124 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.827580 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.300768 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:48.300891 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.800442 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:46.597824 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:47.097559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:47.097660 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:47.142970 1129259 cri.go:89] found id: ""
	I0318 14:22:47.143027 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.143040 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:47.143047 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:47.143196 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:47.183530 1129259 cri.go:89] found id: ""
	I0318 14:22:47.183564 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.183573 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:47.183578 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:47.183654 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:47.226284 1129259 cri.go:89] found id: ""
	I0318 14:22:47.226317 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.226351 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:47.226359 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:47.226433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:47.272642 1129259 cri.go:89] found id: ""
	I0318 14:22:47.272684 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.272708 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:47.272725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:47.272791 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:47.318501 1129259 cri.go:89] found id: ""
	I0318 14:22:47.318547 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.318562 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:47.318571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:47.318652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:47.357743 1129259 cri.go:89] found id: ""
	I0318 14:22:47.357786 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.357801 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:47.357810 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:47.357894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:47.398516 1129259 cri.go:89] found id: ""
	I0318 14:22:47.398550 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.398563 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:47.398571 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:47.398649 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:47.443375 1129259 cri.go:89] found id: ""
	I0318 14:22:47.443413 1129259 logs.go:276] 0 containers: []
	W0318 14:22:47.443426 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:47.443439 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:47.443456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:47.512719 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:47.512773 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:47.560380 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:47.560421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:47.616159 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:47.616221 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:47.631903 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:47.631945 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:47.766159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
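The round above is the fallback diagnostic loop for the cluster whose apiserver never comes up: pgrep finds no kube-apiserver process, so each control-plane component is checked with crictl (all report "0 containers"), then kubelet, dmesg, describe nodes, CRI-O, and container-status logs are gathered, with describe nodes failing because nothing is listening on localhost:8443. A small Go sketch of the container check follows; it assumes crictl is on PATH and usable via sudo, as in the log, and is not minikube's logs.go.

// cri_check_sketch.go: run `crictl ps -a --quiet --name=<name>` for each
// control-plane component and report the ones with no container at all,
// mirroring the "0 containers" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%q: %d container(s)\n", name, len(ids))
		}
	}
}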
	I0318 14:22:50.267365 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:50.287102 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:50.287169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:50.326581 1129259 cri.go:89] found id: ""
	I0318 14:22:50.326618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.326630 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:50.326638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:50.326719 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:50.366526 1129259 cri.go:89] found id: ""
	I0318 14:22:50.366563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.366577 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:50.366585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:50.366656 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:50.407884 1129259 cri.go:89] found id: ""
	I0318 14:22:50.407920 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.407932 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:50.407939 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:50.408011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:50.446932 1129259 cri.go:89] found id: ""
	I0318 14:22:50.446971 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.446982 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:50.446990 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:50.447047 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:50.490489 1129259 cri.go:89] found id: ""
	I0318 14:22:50.490529 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.490542 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:50.490552 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:50.490632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:50.531796 1129259 cri.go:89] found id: ""
	I0318 14:22:50.531876 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.531896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:50.531911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:50.532000 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:50.579429 1129259 cri.go:89] found id: ""
	I0318 14:22:50.579464 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.579473 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:50.579480 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:50.579555 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:50.617981 1129259 cri.go:89] found id: ""
	I0318 14:22:50.618053 1129259 logs.go:276] 0 containers: []
	W0318 14:22:50.618070 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:50.618086 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:50.618107 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:50.690265 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:50.690316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:50.738713 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:50.738750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:50.793127 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:50.793176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:50.809608 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:50.809645 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:50.893389 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:47.699312 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:50.199049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:51.312163 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.812711 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:52.800573 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:54.801034 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:53.394103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:53.410405 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:53.410485 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:53.451524 1129259 cri.go:89] found id: ""
	I0318 14:22:53.451563 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.451577 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:53.451585 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:53.451650 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:53.492923 1129259 cri.go:89] found id: ""
	I0318 14:22:53.492958 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.492972 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:53.492980 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:53.493053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:53.535699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.535738 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.535751 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:53.535757 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:53.535846 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:53.575766 1129259 cri.go:89] found id: ""
	I0318 14:22:53.575807 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.575818 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:53.575843 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:53.575922 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:53.613442 1129259 cri.go:89] found id: ""
	I0318 14:22:53.613473 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.613495 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:53.613502 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:53.613567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:53.655108 1129259 cri.go:89] found id: ""
	I0318 14:22:53.655141 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.655152 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:53.655160 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:53.655233 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:53.693839 1129259 cri.go:89] found id: ""
	I0318 14:22:53.693879 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.693891 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:53.693898 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:53.693971 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:53.736699 1129259 cri.go:89] found id: ""
	I0318 14:22:53.736729 1129259 logs.go:276] 0 containers: []
	W0318 14:22:53.736737 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:53.736747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:53.736759 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:53.790612 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:53.790670 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:53.806185 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:53.806226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:53.893535 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:53.893575 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:53.893593 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:53.966434 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:53.966482 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:52.698863 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:55.200175 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.311249 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:58.312362 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:57.300207 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.300788 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:56.513599 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:56.529572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:56.529652 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:56.569850 1129259 cri.go:89] found id: ""
	I0318 14:22:56.569890 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.569905 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:56.569923 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:56.570001 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:56.607508 1129259 cri.go:89] found id: ""
	I0318 14:22:56.607542 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.607554 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:56.607562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:56.607625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:56.644693 1129259 cri.go:89] found id: ""
	I0318 14:22:56.644731 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.644742 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:56.644751 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:56.644825 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:56.686265 1129259 cri.go:89] found id: ""
	I0318 14:22:56.686304 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.686316 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:56.686323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:56.686377 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:56.732519 1129259 cri.go:89] found id: ""
	I0318 14:22:56.732552 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.732559 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:56.732565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:56.732639 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:56.770015 1129259 cri.go:89] found id: ""
	I0318 14:22:56.770049 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.770059 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:56.770067 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:56.770120 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:56.813964 1129259 cri.go:89] found id: ""
	I0318 14:22:56.813993 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.814004 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:56.814012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:56.814108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:22:56.853650 1129259 cri.go:89] found id: ""
	I0318 14:22:56.853695 1129259 logs.go:276] 0 containers: []
	W0318 14:22:56.853705 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:22:56.853718 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:22:56.853735 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:22:56.911922 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:22:56.911971 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:22:56.935385 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:22:56.935415 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:22:57.040668 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:22:57.040696 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:22:57.040710 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:22:57.123258 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:22:57.123314 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:59.674542 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:22:59.688636 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:22:59.688721 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:22:59.731479 1129259 cri.go:89] found id: ""
	I0318 14:22:59.731508 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.731517 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:22:59.731523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:22:59.731599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:22:59.778127 1129259 cri.go:89] found id: ""
	I0318 14:22:59.778157 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.778169 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:22:59.778176 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:22:59.778245 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:22:59.820812 1129259 cri.go:89] found id: ""
	I0318 14:22:59.820840 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.820850 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:22:59.820856 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:22:59.820930 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:22:59.866491 1129259 cri.go:89] found id: ""
	I0318 14:22:59.866526 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.866539 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:22:59.866548 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:22:59.866614 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:22:59.907135 1129259 cri.go:89] found id: ""
	I0318 14:22:59.907173 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.907185 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:22:59.907194 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:22:59.907266 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:22:59.948578 1129259 cri.go:89] found id: ""
	I0318 14:22:59.948618 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.948627 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:22:59.948633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:22:59.948698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:22:59.986724 1129259 cri.go:89] found id: ""
	I0318 14:22:59.986749 1129259 logs.go:276] 0 containers: []
	W0318 14:22:59.986758 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:22:59.986765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:22:59.986834 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:00.031190 1129259 cri.go:89] found id: ""
	I0318 14:23:00.031223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:00.031233 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:00.031244 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:00.031260 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:00.087925 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:00.087970 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:00.104778 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:00.104810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:00.190730 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:00.190759 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:00.190775 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:00.282713 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:00.282763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:22:57.698375 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:22:59.706517 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:00.814865 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:03.312810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:01.800156 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.302577 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:02.834125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:02.852098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:02.852184 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:02.902683 1129259 cri.go:89] found id: ""
	I0318 14:23:02.902714 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.902726 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:02.902734 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:02.902844 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:02.963685 1129259 cri.go:89] found id: ""
	I0318 14:23:02.963718 1129259 logs.go:276] 0 containers: []
	W0318 14:23:02.963742 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:02.963750 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:02.963822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:03.021566 1129259 cri.go:89] found id: ""
	I0318 14:23:03.021600 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.021611 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:03.021618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:03.021689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:03.062577 1129259 cri.go:89] found id: ""
	I0318 14:23:03.062607 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.062616 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:03.062622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:03.062681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:03.101524 1129259 cri.go:89] found id: ""
	I0318 14:23:03.101554 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.101565 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:03.101573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:03.101645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:03.146253 1129259 cri.go:89] found id: ""
	I0318 14:23:03.146282 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.146294 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:03.146309 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:03.146380 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:03.189196 1129259 cri.go:89] found id: ""
	I0318 14:23:03.189230 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.189241 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:03.189250 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:03.189335 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:03.231627 1129259 cri.go:89] found id: ""
	I0318 14:23:03.231663 1129259 logs.go:276] 0 containers: []
	W0318 14:23:03.231676 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:03.231688 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:03.231719 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:03.248100 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:03.248144 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:03.325484 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:03.325509 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:03.325522 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:03.406877 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:03.406925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:03.457449 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:03.457487 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.011169 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:06.026962 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:06.027033 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:06.068556 1129259 cri.go:89] found id: ""
	I0318 14:23:06.068595 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.068606 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:06.068615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:06.068695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:06.110627 1129259 cri.go:89] found id: ""
	I0318 14:23:06.110667 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.110679 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:06.110687 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:06.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:02.198461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:04.199002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.199307 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:05.811934 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:08.312176 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:10.312721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.800938 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:09.302833 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:06.151933 1129259 cri.go:89] found id: ""
	I0318 14:23:06.152604 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.152620 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:06.152629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:06.152697 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:06.195300 1129259 cri.go:89] found id: ""
	I0318 14:23:06.195338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.195347 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:06.195353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:06.195417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:06.235155 1129259 cri.go:89] found id: ""
	I0318 14:23:06.235207 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.235220 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:06.235229 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:06.235289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:06.282729 1129259 cri.go:89] found id: ""
	I0318 14:23:06.282772 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.282785 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:06.282793 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:06.282869 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:06.323908 1129259 cri.go:89] found id: ""
	I0318 14:23:06.323940 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.323949 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:06.323955 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:06.324011 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:06.365846 1129259 cri.go:89] found id: ""
	I0318 14:23:06.365888 1129259 logs.go:276] 0 containers: []
	W0318 14:23:06.365902 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:06.365915 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:06.365934 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:06.413646 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:06.413696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:06.465648 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:06.465688 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:06.480926 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:06.480958 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:06.554929 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:06.554966 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:06.554985 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.139322 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:09.155700 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:09.155768 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:09.200557 1129259 cri.go:89] found id: ""
	I0318 14:23:09.200585 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.200593 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:09.200599 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:09.200653 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:09.239535 1129259 cri.go:89] found id: ""
	I0318 14:23:09.239573 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.239596 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:09.239613 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:09.239698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:09.279206 1129259 cri.go:89] found id: ""
	I0318 14:23:09.279240 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.279249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:09.279256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:09.279313 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:09.323928 1129259 cri.go:89] found id: ""
	I0318 14:23:09.323964 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.323977 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:09.323986 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:09.324062 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:09.365760 1129259 cri.go:89] found id: ""
	I0318 14:23:09.365796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.365807 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:09.365814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:09.365887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:09.411362 1129259 cri.go:89] found id: ""
	I0318 14:23:09.411394 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.411405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:09.411415 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:09.411508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:09.452793 1129259 cri.go:89] found id: ""
	I0318 14:23:09.452822 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.452873 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:09.452880 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:09.452939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:09.494230 1129259 cri.go:89] found id: ""
	I0318 14:23:09.494259 1129259 logs.go:276] 0 containers: []
	W0318 14:23:09.494269 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:09.494279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:09.494292 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:09.546804 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:09.546848 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:09.562509 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:09.562545 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:09.637701 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:09.637723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:09.637738 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:09.721916 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:09.721962 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:08.699862 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.199072 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.315288 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.813053 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:11.800023 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:14.300632 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:12.271942 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:12.288424 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:12.288503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:12.329950 1129259 cri.go:89] found id: ""
	I0318 14:23:12.329990 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.330004 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:12.330012 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:12.330083 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:12.368748 1129259 cri.go:89] found id: ""
	I0318 14:23:12.368798 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.368812 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:12.368821 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:12.368894 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:12.408280 1129259 cri.go:89] found id: ""
	I0318 14:23:12.408313 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.408323 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:12.408329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:12.408385 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:12.449537 1129259 cri.go:89] found id: ""
	I0318 14:23:12.449583 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.449593 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:12.449605 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:12.449661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:12.488394 1129259 cri.go:89] found id: ""
	I0318 14:23:12.488427 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.488441 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:12.488449 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:12.488528 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:12.527613 1129259 cri.go:89] found id: ""
	I0318 14:23:12.527649 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.527658 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:12.527664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:12.527716 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:12.568953 1129259 cri.go:89] found id: ""
	I0318 14:23:12.568983 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.568991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:12.568997 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:12.569051 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:12.609622 1129259 cri.go:89] found id: ""
	I0318 14:23:12.609661 1129259 logs.go:276] 0 containers: []
	W0318 14:23:12.609672 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:12.609683 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:12.609696 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:12.663119 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:12.663176 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:12.679466 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:12.679508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:12.763085 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:12.763110 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:12.763125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:12.848677 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:12.848721 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.393108 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:15.406670 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:15.406821 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:15.445518 1129259 cri.go:89] found id: ""
	I0318 14:23:15.445556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.445567 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:15.445574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:15.445632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:15.488009 1129259 cri.go:89] found id: ""
	I0318 14:23:15.488040 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.488052 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:15.488089 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:15.488160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:15.526067 1129259 cri.go:89] found id: ""
	I0318 14:23:15.526099 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.526108 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:15.526115 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:15.526185 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:15.567573 1129259 cri.go:89] found id: ""
	I0318 14:23:15.567608 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.567622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:15.567630 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:15.567701 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:15.606585 1129259 cri.go:89] found id: ""
	I0318 14:23:15.606615 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.606626 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:15.606642 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:15.606700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:15.645265 1129259 cri.go:89] found id: ""
	I0318 14:23:15.645296 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.645305 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:15.645312 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:15.645368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:15.685299 1129259 cri.go:89] found id: ""
	I0318 14:23:15.685332 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.685342 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:15.685348 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:15.685421 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:15.725781 1129259 cri.go:89] found id: ""
	I0318 14:23:15.725818 1129259 logs.go:276] 0 containers: []
	W0318 14:23:15.725832 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:15.725848 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:15.725867 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:15.769528 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:15.769568 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:15.825418 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:15.825461 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:15.842139 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:15.842173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:15.922354 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:15.922419 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:15.922438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:13.199539 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:15.700968 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:17.311266 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:19.311540 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:16.800323 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.801497 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:18.503475 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:18.518462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:18.518561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:18.559354 1129259 cri.go:89] found id: ""
	I0318 14:23:18.559392 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.559404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:18.559412 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:18.559484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:18.604455 1129259 cri.go:89] found id: ""
	I0318 14:23:18.604488 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.604500 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:18.604507 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:18.604592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:18.646032 1129259 cri.go:89] found id: ""
	I0318 14:23:18.646098 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.646110 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:18.646119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:18.646188 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:18.684752 1129259 cri.go:89] found id: ""
	I0318 14:23:18.684791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.684802 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:18.684808 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:18.684863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:18.728256 1129259 cri.go:89] found id: ""
	I0318 14:23:18.728299 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.728321 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:18.728330 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:18.728409 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:18.771335 1129259 cri.go:89] found id: ""
	I0318 14:23:18.771382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.771392 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:18.771398 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:18.771467 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:18.812273 1129259 cri.go:89] found id: ""
	I0318 14:23:18.812305 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.812318 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:18.812331 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:18.812399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:18.854901 1129259 cri.go:89] found id: ""
	I0318 14:23:18.854942 1129259 logs.go:276] 0 containers: []
	W0318 14:23:18.854957 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:18.854971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:18.854990 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:18.939982 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:18.940031 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:18.985433 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:18.985465 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:19.041353 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:19.041405 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:19.057764 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:19.057810 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:19.131974 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:18.198887 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:20.698596 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.312215 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.810513 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.299039 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:23.300143 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.798699 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:21.632395 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:21.646344 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:21.646434 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:21.687475 1129259 cri.go:89] found id: ""
	I0318 14:23:21.687526 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.687542 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:21.687553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:21.687636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:21.728684 1129259 cri.go:89] found id: ""
	I0318 14:23:21.728722 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.728734 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:21.728742 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:21.728816 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:21.772395 1129259 cri.go:89] found id: ""
	I0318 14:23:21.772436 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.772449 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:21.772457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:21.772529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:21.812758 1129259 cri.go:89] found id: ""
	I0318 14:23:21.812793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.812804 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:21.812813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:21.812878 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:21.854334 1129259 cri.go:89] found id: ""
	I0318 14:23:21.854376 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.854387 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:21.854395 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:21.854468 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:21.894237 1129259 cri.go:89] found id: ""
	I0318 14:23:21.894270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.894278 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:21.894285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:21.894339 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:21.931671 1129259 cri.go:89] found id: ""
	I0318 14:23:21.931709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.931720 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:21.931729 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:21.931795 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:21.971060 1129259 cri.go:89] found id: ""
	I0318 14:23:21.971091 1129259 logs.go:276] 0 containers: []
	W0318 14:23:21.971100 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:21.971111 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:21.971125 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:22.055070 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:22.055126 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.101854 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:22.101888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:22.157502 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:22.157550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:22.175612 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:22.175648 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:22.261607 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:24.761996 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:24.777475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:24.777545 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:24.818385 1129259 cri.go:89] found id: ""
	I0318 14:23:24.818421 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.818434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:24.818447 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:24.818508 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:24.856232 1129259 cri.go:89] found id: ""
	I0318 14:23:24.856270 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.856282 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:24.856291 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:24.856360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:24.891887 1129259 cri.go:89] found id: ""
	I0318 14:23:24.891924 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.891936 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:24.891945 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:24.892020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:24.937555 1129259 cri.go:89] found id: ""
	I0318 14:23:24.937594 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.937605 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:24.937614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:24.937689 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:24.978561 1129259 cri.go:89] found id: ""
	I0318 14:23:24.978598 1129259 logs.go:276] 0 containers: []
	W0318 14:23:24.978609 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:24.978620 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:24.978692 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:25.026398 1129259 cri.go:89] found id: ""
	I0318 14:23:25.026453 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.026462 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:25.026475 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:25.026529 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:25.063346 1129259 cri.go:89] found id: ""
	I0318 14:23:25.063382 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.063394 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:25.063403 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:25.063482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:25.106097 1129259 cri.go:89] found id: ""
	I0318 14:23:25.106135 1129259 logs.go:276] 0 containers: []
	W0318 14:23:25.106147 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:25.106160 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:25.106177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:25.162362 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:25.162412 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:25.179898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:25.179943 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:25.281856 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:25.281896 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:25.281914 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:25.371561 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:25.371605 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:22.699705 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.200662 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:25.811810 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.813013 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.311457 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.800554 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.304272 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:27.915774 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:27.931725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:27.931806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:27.971259 1129259 cri.go:89] found id: ""
	I0318 14:23:27.971297 1129259 logs.go:276] 0 containers: []
	W0318 14:23:27.971322 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:27.971340 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:27.971411 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:28.012704 1129259 cri.go:89] found id: ""
	I0318 14:23:28.012735 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.012747 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:28.012755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:28.012829 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:28.051639 1129259 cri.go:89] found id: ""
	I0318 14:23:28.051669 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.051680 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:28.051686 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:28.051753 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:28.091344 1129259 cri.go:89] found id: ""
	I0318 14:23:28.091377 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.091386 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:28.091392 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:28.091445 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:28.131190 1129259 cri.go:89] found id: ""
	I0318 14:23:28.131224 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.131237 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:28.131246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:28.131324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:28.171717 1129259 cri.go:89] found id: ""
	I0318 14:23:28.171756 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.171769 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:28.171777 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:28.171863 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:28.207812 1129259 cri.go:89] found id: ""
	I0318 14:23:28.207862 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.207874 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:28.207886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:28.207942 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:28.252721 1129259 cri.go:89] found id: ""
	I0318 14:23:28.252766 1129259 logs.go:276] 0 containers: []
	W0318 14:23:28.252779 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:28.252796 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:28.252812 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:28.311227 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:28.311278 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:28.328390 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:28.328422 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:28.413973 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:28.414005 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:28.414026 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:28.504716 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:28.504764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.049944 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:31.065402 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:31.065490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:31.110647 1129259 cri.go:89] found id: ""
	I0318 14:23:31.110675 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.110683 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:31.110690 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:31.110754 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:27.700002 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:30.200376 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.311860 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.313084 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:32.802042 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:35.299530 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:31.154046 1129259 cri.go:89] found id: ""
	I0318 14:23:31.154075 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.154084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:31.154091 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:31.154162 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:31.191863 1129259 cri.go:89] found id: ""
	I0318 14:23:31.191894 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.191904 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:31.191911 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:31.191979 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:31.234961 1129259 cri.go:89] found id: ""
	I0318 14:23:31.234993 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.235003 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:31.235011 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:31.235082 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:31.290365 1129259 cri.go:89] found id: ""
	I0318 14:23:31.290402 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.290414 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:31.290421 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:31.290516 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:31.331162 1129259 cri.go:89] found id: ""
	I0318 14:23:31.331198 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.331211 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:31.331219 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:31.331283 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:31.370382 1129259 cri.go:89] found id: ""
	I0318 14:23:31.370424 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.370436 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:31.370448 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:31.370520 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:31.409913 1129259 cri.go:89] found id: ""
	I0318 14:23:31.409948 1129259 logs.go:276] 0 containers: []
	W0318 14:23:31.409959 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:31.409971 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:31.409987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:31.493416 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:31.493456 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:31.546275 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:31.546309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:31.598580 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:31.598639 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:31.615741 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:31.615778 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:31.694159 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.194339 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:34.209763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:34.209849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:34.248405 1129259 cri.go:89] found id: ""
	I0318 14:23:34.248442 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.248456 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:34.248464 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:34.248538 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:34.290217 1129259 cri.go:89] found id: ""
	I0318 14:23:34.290249 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.290263 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:34.290270 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:34.290338 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:34.337403 1129259 cri.go:89] found id: ""
	I0318 14:23:34.337441 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.337452 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:34.337460 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:34.337533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:34.380042 1129259 cri.go:89] found id: ""
	I0318 14:23:34.380082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.380096 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:34.380105 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:34.380181 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:34.417834 1129259 cri.go:89] found id: ""
	I0318 14:23:34.417866 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.417879 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:34.417888 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:34.417960 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:34.456496 1129259 cri.go:89] found id: ""
	I0318 14:23:34.456538 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.456549 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:34.456559 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:34.456629 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:34.497772 1129259 cri.go:89] found id: ""
	I0318 14:23:34.497809 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.497822 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:34.497831 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:34.497887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:34.544757 1129259 cri.go:89] found id: ""
	I0318 14:23:34.544811 1129259 logs.go:276] 0 containers: []
	W0318 14:23:34.544825 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:34.544840 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:34.544859 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:34.602192 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:34.602237 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:34.619476 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:34.619515 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:34.695721 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:34.695761 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:34.695781 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:34.773045 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:34.773090 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:32.212811 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:34.700061 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:36.811811 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.312768 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.300434 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.300586 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:37.320468 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:37.335756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:37.335847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:37.379742 1129259 cri.go:89] found id: ""
	I0318 14:23:37.379791 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.379804 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:37.379812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:37.379898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:37.421225 1129259 cri.go:89] found id: ""
	I0318 14:23:37.421261 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.421276 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:37.421284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:37.421353 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:37.463393 1129259 cri.go:89] found id: ""
	I0318 14:23:37.463426 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.463435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:37.463441 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:37.463503 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:37.505835 1129259 cri.go:89] found id: ""
	I0318 14:23:37.505871 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.505879 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:37.505885 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:37.505951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:37.545983 1129259 cri.go:89] found id: ""
	I0318 14:23:37.546016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.546029 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:37.546037 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:37.546110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:37.585433 1129259 cri.go:89] found id: ""
	I0318 14:23:37.585466 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.585477 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:37.585486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:37.585561 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:37.622978 1129259 cri.go:89] found id: ""
	I0318 14:23:37.623016 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.623027 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:37.623034 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:37.623110 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:37.675689 1129259 cri.go:89] found id: ""
	I0318 14:23:37.675721 1129259 logs.go:276] 0 containers: []
	W0318 14:23:37.675732 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:37.675743 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:37.675763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:37.785788 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.785820 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:37.785839 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:37.870218 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:37.870261 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:37.918199 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:37.918236 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:37.975082 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:37.975135 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:40.491216 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:40.507123 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:40.507189 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:40.548763 1129259 cri.go:89] found id: ""
	I0318 14:23:40.548796 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.548806 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:40.548812 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:40.548865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:40.589821 1129259 cri.go:89] found id: ""
	I0318 14:23:40.589859 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.589872 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:40.589879 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:40.589961 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:40.629571 1129259 cri.go:89] found id: ""
	I0318 14:23:40.629603 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.629615 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:40.629622 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:40.629698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:40.668648 1129259 cri.go:89] found id: ""
	I0318 14:23:40.668682 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.668692 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:40.668719 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:40.668789 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:40.712948 1129259 cri.go:89] found id: ""
	I0318 14:23:40.713005 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.713018 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:40.713027 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:40.713103 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:40.763269 1129259 cri.go:89] found id: ""
	I0318 14:23:40.763298 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.763307 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:40.763313 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:40.763366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:40.809737 1129259 cri.go:89] found id: ""
	I0318 14:23:40.809776 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.809789 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:40.809798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:40.809873 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:40.849882 1129259 cri.go:89] found id: ""
	I0318 14:23:40.849921 1129259 logs.go:276] 0 containers: []
	W0318 14:23:40.849931 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:40.849941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:40.849961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:40.931042 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:40.931084 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:40.973246 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:40.973280 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:41.028835 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:41.028880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:41.044250 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:41.044293 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:41.116937 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:37.199672 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:39.698826 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.810759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.812721 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:41.800736 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:43.617773 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:43.635147 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:43.635216 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:43.683392 1129259 cri.go:89] found id: ""
	I0318 14:23:43.683430 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.683446 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:43.683455 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:43.683521 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:43.729761 1129259 cri.go:89] found id: ""
	I0318 14:23:43.729801 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.729813 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:43.729820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:43.729888 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:43.790694 1129259 cri.go:89] found id: ""
	I0318 14:23:43.790728 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.790741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:43.790748 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:43.790819 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:43.838506 1129259 cri.go:89] found id: ""
	I0318 14:23:43.838537 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.838548 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:43.838557 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:43.838625 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:43.879695 1129259 cri.go:89] found id: ""
	I0318 14:23:43.879725 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.879735 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:43.879743 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:43.879806 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:43.919206 1129259 cri.go:89] found id: ""
	I0318 14:23:43.919238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.919250 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:43.919258 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:43.919333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:43.966266 1129259 cri.go:89] found id: ""
	I0318 14:23:43.966308 1129259 logs.go:276] 0 containers: []
	W0318 14:23:43.966321 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:43.966329 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:43.966399 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:44.006272 1129259 cri.go:89] found id: ""
	I0318 14:23:44.006310 1129259 logs.go:276] 0 containers: []
	W0318 14:23:44.006324 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:44.006339 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:44.006358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:44.063345 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:44.063395 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:44.079323 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:44.079365 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:44.158132 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:44.158157 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:44.158177 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:44.244657 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:44.244707 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:41.707557 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:44.199509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.311703 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.811077 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.301804 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:48.800280 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.801802 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:46.791776 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:46.807457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:46.807547 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:46.849964 1129259 cri.go:89] found id: ""
	I0318 14:23:46.850003 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.850017 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:46.850025 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:46.850084 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:46.893174 1129259 cri.go:89] found id: ""
	I0318 14:23:46.893214 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.893227 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:46.893235 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:46.893314 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:46.933932 1129259 cri.go:89] found id: ""
	I0318 14:23:46.933969 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.933981 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:46.933998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:46.934075 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:46.973034 1129259 cri.go:89] found id: ""
	I0318 14:23:46.973073 1129259 logs.go:276] 0 containers: []
	W0318 14:23:46.973085 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:46.973093 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:46.973165 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:47.013465 1129259 cri.go:89] found id: ""
	I0318 14:23:47.013502 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.013515 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:47.013523 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:47.013595 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:47.050526 1129259 cri.go:89] found id: ""
	I0318 14:23:47.050556 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.050569 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:47.050583 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:47.050651 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:47.090395 1129259 cri.go:89] found id: ""
	I0318 14:23:47.090435 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.090448 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:47.090456 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:47.090533 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:47.132761 1129259 cri.go:89] found id: ""
	I0318 14:23:47.132790 1129259 logs.go:276] 0 containers: []
	W0318 14:23:47.132799 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:47.132809 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:47.132822 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:47.179035 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:47.179073 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:47.231641 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:47.231687 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:47.248134 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:47.248171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:47.330265 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:47.330294 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:47.330311 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:49.912288 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:49.927753 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:49.927842 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:49.968306 1129259 cri.go:89] found id: ""
	I0318 14:23:49.968338 1129259 logs.go:276] 0 containers: []
	W0318 14:23:49.968348 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:49.968354 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:49.968424 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:50.009781 1129259 cri.go:89] found id: ""
	I0318 14:23:50.009813 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.009821 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:50.009828 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:50.009892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:50.049203 1129259 cri.go:89] found id: ""
	I0318 14:23:50.049238 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.049249 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:50.049257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:50.049323 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:50.089679 1129259 cri.go:89] found id: ""
	I0318 14:23:50.089709 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.089719 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:50.089725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:50.089790 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:50.132352 1129259 cri.go:89] found id: ""
	I0318 14:23:50.132384 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.132395 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:50.132404 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:50.132474 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:50.169043 1129259 cri.go:89] found id: ""
	I0318 14:23:50.169076 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.169089 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:50.169098 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:50.169166 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:50.207753 1129259 cri.go:89] found id: ""
	I0318 14:23:50.207793 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.207805 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:50.207813 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:50.207898 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:50.247048 1129259 cri.go:89] found id: ""
	I0318 14:23:50.247082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:50.247093 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:50.247103 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:50.247114 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:50.299768 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:50.299816 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:50.317627 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:50.317674 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:50.393122 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:50.393152 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:50.393170 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:50.480828 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:50.480880 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:46.698786 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:49.198083 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:51.198509 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:50.812029 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.311681 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.300917 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.301653 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:53.030467 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.044538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:53.044615 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:53.082312 1129259 cri.go:89] found id: ""
	I0318 14:23:53.082351 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.082361 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:53.082370 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:53.082431 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:53.127597 1129259 cri.go:89] found id: ""
	I0318 14:23:53.127631 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.127640 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:53.127645 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:53.127708 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:53.172152 1129259 cri.go:89] found id: ""
	I0318 14:23:53.172189 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.172203 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:53.172212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:53.172295 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:53.210210 1129259 cri.go:89] found id: ""
	I0318 14:23:53.210268 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.210281 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:53.210289 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:53.210356 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:53.248963 1129259 cri.go:89] found id: ""
	I0318 14:23:53.248995 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.249004 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:53.249010 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:53.249065 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:53.287853 1129259 cri.go:89] found id: ""
	I0318 14:23:53.287886 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.287896 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:53.287903 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:53.287956 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:53.326858 1129259 cri.go:89] found id: ""
	I0318 14:23:53.326895 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.326908 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:53.326917 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:53.326987 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:53.369347 1129259 cri.go:89] found id: ""
	I0318 14:23:53.369381 1129259 logs.go:276] 0 containers: []
	W0318 14:23:53.369394 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:53.369407 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:53.369424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:53.420342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:53.420387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:53.436718 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:53.436750 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:53.517954 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:53.518018 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:53.518036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:53.597726 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:53.597782 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:56.144313 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:53.699341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.699481 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:55.810495 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.810917 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:59.812265 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:57.800712 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.300089 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:23:56.159569 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:56.159663 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:56.198525 1129259 cri.go:89] found id: ""
	I0318 14:23:56.198563 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.198575 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:56.198584 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:56.198662 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:56.242877 1129259 cri.go:89] found id: ""
	I0318 14:23:56.242913 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.242927 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:56.242942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:56.243018 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:56.282499 1129259 cri.go:89] found id: ""
	I0318 14:23:56.282531 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.282541 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:56.282547 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:56.282618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:56.321765 1129259 cri.go:89] found id: ""
	I0318 14:23:56.321810 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.321825 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:56.321833 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:56.321904 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:56.364005 1129259 cri.go:89] found id: ""
	I0318 14:23:56.364042 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.364054 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:56.364064 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:56.364138 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:56.402312 1129259 cri.go:89] found id: ""
	I0318 14:23:56.402339 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.402350 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:56.402356 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:56.402419 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:56.445638 1129259 cri.go:89] found id: ""
	I0318 14:23:56.445674 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.445686 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:56.445694 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:56.445760 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:56.488833 1129259 cri.go:89] found id: ""
	I0318 14:23:56.488870 1129259 logs.go:276] 0 containers: []
	W0318 14:23:56.488883 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:56.488896 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:56.488915 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:56.540862 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:56.540907 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:56.557124 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:56.557171 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:56.634679 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:56.634711 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:56.634727 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:56.716419 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:56.716464 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.263125 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:23:59.277619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:23:59.277703 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:23:59.318616 1129259 cri.go:89] found id: ""
	I0318 14:23:59.318648 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.318661 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:23:59.318668 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:23:59.318740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:23:59.358540 1129259 cri.go:89] found id: ""
	I0318 14:23:59.358577 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.358589 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:23:59.358597 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:23:59.358670 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:23:59.399046 1129259 cri.go:89] found id: ""
	I0318 14:23:59.399082 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.399093 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:23:59.399099 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:23:59.399169 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:23:59.439165 1129259 cri.go:89] found id: ""
	I0318 14:23:59.439223 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.439236 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:23:59.439245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:23:59.439312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:23:59.476719 1129259 cri.go:89] found id: ""
	I0318 14:23:59.476755 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.476767 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:23:59.476775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:23:59.476833 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:23:59.515847 1129259 cri.go:89] found id: ""
	I0318 14:23:59.515878 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.515888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:23:59.515895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:23:59.515966 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:23:59.560831 1129259 cri.go:89] found id: ""
	I0318 14:23:59.560861 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.560871 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:23:59.560877 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:23:59.560939 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:23:59.601176 1129259 cri.go:89] found id: ""
	I0318 14:23:59.601209 1129259 logs.go:276] 0 containers: []
	W0318 14:23:59.601219 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:23:59.601237 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:23:59.601253 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:23:59.616829 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:23:59.616862 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:23:59.695270 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:23:59.695300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:23:59.695316 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:23:59.773564 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:23:59.773610 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:23:59.819326 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:23:59.819364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:23:58.198656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:00.699394 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.311601 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.311669 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.300584 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:04.300628 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:02.372331 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:02.388245 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:02.388333 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:02.425594 1129259 cri.go:89] found id: ""
	I0318 14:24:02.425639 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.425655 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:02.425664 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:02.425740 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:02.467755 1129259 cri.go:89] found id: ""
	I0318 14:24:02.467786 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.467794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:02.467800 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:02.467890 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:02.510004 1129259 cri.go:89] found id: ""
	I0318 14:24:02.510035 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.510045 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:02.510051 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:02.510104 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:02.555590 1129259 cri.go:89] found id: ""
	I0318 14:24:02.555623 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.555632 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:02.555638 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:02.555693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:02.595096 1129259 cri.go:89] found id: ""
	I0318 14:24:02.595125 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.595135 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:02.595141 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:02.595214 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:02.639452 1129259 cri.go:89] found id: ""
	I0318 14:24:02.639482 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.639491 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:02.639498 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:02.639563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:02.677653 1129259 cri.go:89] found id: ""
	I0318 14:24:02.677684 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.677700 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:02.677706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:02.677765 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:02.714853 1129259 cri.go:89] found id: ""
	I0318 14:24:02.714885 1129259 logs.go:276] 0 containers: []
	W0318 14:24:02.714898 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:02.714909 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:02.714923 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:02.767697 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:02.767742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:02.782786 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:02.782844 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:02.868981 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:02.869020 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:02.869037 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:02.944382 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:02.944421 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.491779 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:05.507129 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:05.507213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:05.548809 1129259 cri.go:89] found id: ""
	I0318 14:24:05.548845 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.548858 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:05.548866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:05.548941 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:05.588005 1129259 cri.go:89] found id: ""
	I0318 14:24:05.588040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.588050 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:05.588056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:05.588108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:05.627670 1129259 cri.go:89] found id: ""
	I0318 14:24:05.627707 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.627720 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:05.627728 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:05.627814 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:05.666900 1129259 cri.go:89] found id: ""
	I0318 14:24:05.666936 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.666948 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:05.666957 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:05.667029 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:05.705796 1129259 cri.go:89] found id: ""
	I0318 14:24:05.705831 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.705844 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:05.705852 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:05.705923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:05.749842 1129259 cri.go:89] found id: ""
	I0318 14:24:05.749875 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.749888 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:05.749896 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:05.749981 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:05.790843 1129259 cri.go:89] found id: ""
	I0318 14:24:05.790881 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.790896 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:05.790905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:05.790992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:05.832347 1129259 cri.go:89] found id: ""
	I0318 14:24:05.832383 1129259 logs.go:276] 0 containers: []
	W0318 14:24:05.832395 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:05.832408 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:05.832424 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:05.874185 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:05.874219 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:05.929482 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:05.929534 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:05.945151 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:05.945187 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:06.024617 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:06.024644 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:06.024663 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:03.198564 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:05.198935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.811819 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.812462 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:06.300681 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.300912 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.799297 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:08.607030 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:08.622039 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:08.622140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:08.661599 1129259 cri.go:89] found id: ""
	I0318 14:24:08.661638 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.661647 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:08.661654 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:08.661728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:08.699890 1129259 cri.go:89] found id: ""
	I0318 14:24:08.699920 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.699931 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:08.699940 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:08.700009 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:08.745504 1129259 cri.go:89] found id: ""
	I0318 14:24:08.745541 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.745554 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:08.745562 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:08.745624 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:08.784162 1129259 cri.go:89] found id: ""
	I0318 14:24:08.784204 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.784217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:08.784226 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:08.784302 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:08.824197 1129259 cri.go:89] found id: ""
	I0318 14:24:08.824227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.824236 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:08.824242 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:08.824301 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:08.865096 1129259 cri.go:89] found id: ""
	I0318 14:24:08.865128 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.865137 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:08.865146 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:08.865207 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:08.905337 1129259 cri.go:89] found id: ""
	I0318 14:24:08.905371 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.905385 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:08.905393 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:08.905477 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:08.945837 1129259 cri.go:89] found id: ""
	I0318 14:24:08.945880 1129259 logs.go:276] 0 containers: []
	W0318 14:24:08.945894 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
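	Each scan above asks CRI-O for containers matching a fixed list of control-plane names and finds none, so only the generic sources (kubelet, dmesg, describe nodes, CRI-O, container status) get collected. A minimal sketch of the same per-component scan (assuming crictl is installed and the command may run via sudo, exactly as in the log):

	    // cri_scan.go - mirrors the crictl queries issued by logs.go/cri.go above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	        for _, name := range names {
	            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            if err != nil {
	                fmt.Printf("%s: crictl failed: %v\n", name, err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            // Zero everywhere reproduces the "No container was found matching ..." warnings above.
	            fmt.Printf("%s: %d container(s)\n", name, len(ids))
	        }
	    }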
	I0318 14:24:08.945906 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:08.945925 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:09.023425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:09.023454 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:09.023473 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:09.107945 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:09.107989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:09.149742 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:09.149804 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:09.202813 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:09.202856 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:07.699433 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:10.198062 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.311072 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:13.311533 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:15.313064 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:12.799619 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.800637 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:11.720686 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:11.735125 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:11.735218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:11.772164 1129259 cri.go:89] found id: ""
	I0318 14:24:11.772198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.772210 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:11.772218 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:11.772285 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:11.811279 1129259 cri.go:89] found id: ""
	I0318 14:24:11.811309 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.811326 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:11.811334 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:11.811402 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:11.855011 1129259 cri.go:89] found id: ""
	I0318 14:24:11.855052 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.855065 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:11.855073 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:11.855146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:11.893168 1129259 cri.go:89] found id: ""
	I0318 14:24:11.893198 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.893206 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:11.893212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:11.893273 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:11.930545 1129259 cri.go:89] found id: ""
	I0318 14:24:11.930583 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.930598 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:11.930608 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:11.930680 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:11.974014 1129259 cri.go:89] found id: ""
	I0318 14:24:11.974040 1129259 logs.go:276] 0 containers: []
	W0318 14:24:11.974049 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:11.974063 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:11.974147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:12.025218 1129259 cri.go:89] found id: ""
	I0318 14:24:12.025247 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.025257 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:12.025263 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:12.025340 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:12.068361 1129259 cri.go:89] found id: ""
	I0318 14:24:12.068393 1129259 logs.go:276] 0 containers: []
	W0318 14:24:12.068406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:12.068425 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:12.068444 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:12.122840 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:12.122892 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:12.138841 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:12.138877 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:12.219567 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:12.219588 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:12.219602 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:12.307322 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:12.307368 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:14.855576 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:14.870076 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:14.870160 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:14.910346 1129259 cri.go:89] found id: ""
	I0318 14:24:14.910387 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.910399 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:14.910407 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:14.910479 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:14.957120 1129259 cri.go:89] found id: ""
	I0318 14:24:14.957151 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.957165 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:14.957170 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:14.957238 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:14.998329 1129259 cri.go:89] found id: ""
	I0318 14:24:14.998360 1129259 logs.go:276] 0 containers: []
	W0318 14:24:14.998372 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:14.998381 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:14.998450 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:15.036994 1129259 cri.go:89] found id: ""
	I0318 14:24:15.037025 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.037034 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:15.037040 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:15.037095 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:15.075241 1129259 cri.go:89] found id: ""
	I0318 14:24:15.075272 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.075282 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:15.075288 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:15.075368 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:15.114149 1129259 cri.go:89] found id: ""
	I0318 14:24:15.114199 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.114208 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:15.114215 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:15.114296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:15.155710 1129259 cri.go:89] found id: ""
	I0318 14:24:15.155745 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.155755 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:15.155762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:15.155847 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:15.196863 1129259 cri.go:89] found id: ""
	I0318 14:24:15.196899 1129259 logs.go:276] 0 containers: []
	W0318 14:24:15.196910 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:15.196928 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:15.196946 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:15.253103 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:15.253147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:15.268783 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:15.268829 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:15.352694 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:15.352723 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:15.352743 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:15.435023 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:15.435068 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:12.201234 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:14.698988 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.811663 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.812068 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:16.801294 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.301959 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:17.978170 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.994862 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:17.994929 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:18.036067 1129259 cri.go:89] found id: ""
	I0318 14:24:18.036103 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.036112 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:18.036119 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:18.036186 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:18.081249 1129259 cri.go:89] found id: ""
	I0318 14:24:18.081280 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.081291 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:18.081297 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:18.081352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:18.122336 1129259 cri.go:89] found id: ""
	I0318 14:24:18.122367 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.122376 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:18.122382 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:18.122441 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:18.163897 1129259 cri.go:89] found id: ""
	I0318 14:24:18.163931 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.163940 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:18.163949 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:18.164012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:18.206744 1129259 cri.go:89] found id: ""
	I0318 14:24:18.206781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.206792 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:18.206798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:18.206881 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:18.245738 1129259 cri.go:89] found id: ""
	I0318 14:24:18.245767 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.245778 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:18.245786 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:18.245851 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:18.285181 1129259 cri.go:89] found id: ""
	I0318 14:24:18.285211 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.285221 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:18.285228 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:18.285282 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:18.328130 1129259 cri.go:89] found id: ""
	I0318 14:24:18.328162 1129259 logs.go:276] 0 containers: []
	W0318 14:24:18.328174 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:18.328193 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:18.328210 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:18.410346 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:18.410387 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:18.467118 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:18.467154 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:18.530635 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:18.530704 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:18.549898 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:18.549952 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:18.646134 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.146368 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:17.199048 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:19.200040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:22.312401 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.812678 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.799684 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.301211 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:21.162077 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:21.162156 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:21.200211 1129259 cri.go:89] found id: ""
	I0318 14:24:21.200242 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.200251 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:21.200257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:21.200329 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:21.241228 1129259 cri.go:89] found id: ""
	I0318 14:24:21.241265 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.241277 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:21.241284 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:21.241359 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:21.278110 1129259 cri.go:89] found id: ""
	I0318 14:24:21.278147 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.278159 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:21.278167 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:21.278240 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:21.317067 1129259 cri.go:89] found id: ""
	I0318 14:24:21.317104 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.317115 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:21.317124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:21.317201 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:21.356217 1129259 cri.go:89] found id: ""
	I0318 14:24:21.356251 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.356260 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:21.356267 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:21.356326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:21.394990 1129259 cri.go:89] found id: ""
	I0318 14:24:21.395031 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.395047 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:21.395056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:21.395136 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:21.435880 1129259 cri.go:89] found id: ""
	I0318 14:24:21.435913 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.435928 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:21.435937 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:21.436023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:21.477754 1129259 cri.go:89] found id: ""
	I0318 14:24:21.477801 1129259 logs.go:276] 0 containers: []
	W0318 14:24:21.477814 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:21.477826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:21.477851 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:21.493178 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:21.493220 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:21.570200 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:21.570239 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:21.570257 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:21.658100 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:21.658147 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.703286 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:21.703327 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.266730 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:24.285544 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:24.285655 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:24.338183 1129259 cri.go:89] found id: ""
	I0318 14:24:24.338234 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.338248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:24.338256 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:24.338326 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:24.407496 1129259 cri.go:89] found id: ""
	I0318 14:24:24.407529 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.407543 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:24.407551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:24.407618 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:24.457689 1129259 cri.go:89] found id: ""
	I0318 14:24:24.457728 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.457741 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:24.457749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:24.457831 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:24.498685 1129259 cri.go:89] found id: ""
	I0318 14:24:24.498709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.498718 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:24.498725 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:24.498783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:24.537966 1129259 cri.go:89] found id: ""
	I0318 14:24:24.537999 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.538009 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:24.538016 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:24.538070 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:24.576493 1129259 cri.go:89] found id: ""
	I0318 14:24:24.576522 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.576532 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:24.576538 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:24.576592 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:24.613764 1129259 cri.go:89] found id: ""
	I0318 14:24:24.613799 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.613812 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:24.613820 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:24.613893 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:24.655862 1129259 cri.go:89] found id: ""
	I0318 14:24:24.655892 1129259 logs.go:276] 0 containers: []
	W0318 14:24:24.655906 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:24.655919 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:24.655937 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:24.710557 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:24.710604 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:24.725755 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:24.725792 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:24.805585 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:24.805616 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:24.805633 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:24.889922 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:24.889989 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:21.699674 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:24.199382 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.312672 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.315087 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:26.800594 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:29.299763 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:27.437998 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:27.454560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:27.454664 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:27.493973 1129259 cri.go:89] found id: ""
	I0318 14:24:27.494003 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.494011 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:27.494019 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:27.494078 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:27.543071 1129259 cri.go:89] found id: ""
	I0318 14:24:27.543109 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.543122 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:27.543131 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:27.543211 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:27.586163 1129259 cri.go:89] found id: ""
	I0318 14:24:27.586196 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.586212 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:27.586220 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:27.586324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:27.625233 1129259 cri.go:89] found id: ""
	I0318 14:24:27.625271 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.625284 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:27.625293 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:27.625365 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:27.663729 1129259 cri.go:89] found id: ""
	I0318 14:24:27.663772 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.663782 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:27.663798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:27.663887 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:27.702041 1129259 cri.go:89] found id: ""
	I0318 14:24:27.702072 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.702082 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:27.702090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:27.702158 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:27.745186 1129259 cri.go:89] found id: ""
	I0318 14:24:27.745216 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.745226 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:27.745233 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:27.745296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:27.786673 1129259 cri.go:89] found id: ""
	I0318 14:24:27.786709 1129259 logs.go:276] 0 containers: []
	W0318 14:24:27.786719 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:27.786729 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:27.786742 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:27.842472 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:27.842531 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:27.856985 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:27.857016 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:27.935445 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:27.935478 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:27.935496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:28.024737 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:28.024795 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:30.571003 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:30.585617 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:30.585714 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:30.628461 1129259 cri.go:89] found id: ""
	I0318 14:24:30.628488 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.628497 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:30.628503 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:30.628566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:30.674555 1129259 cri.go:89] found id: ""
	I0318 14:24:30.674595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.674610 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:30.674618 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:30.674695 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:30.714899 1129259 cri.go:89] found id: ""
	I0318 14:24:30.714950 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.714961 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:30.714970 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:30.715039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:30.756263 1129259 cri.go:89] found id: ""
	I0318 14:24:30.756295 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.756305 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:30.756311 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:30.756366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:30.795213 1129259 cri.go:89] found id: ""
	I0318 14:24:30.795244 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.795258 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:30.795265 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:30.795336 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:30.837198 1129259 cri.go:89] found id: ""
	I0318 14:24:30.837233 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.837242 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:30.837248 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:30.837306 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:30.875367 1129259 cri.go:89] found id: ""
	I0318 14:24:30.875404 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.875417 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:30.875427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:30.875510 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:30.918664 1129259 cri.go:89] found id: ""
	I0318 14:24:30.918701 1129259 logs.go:276] 0 containers: []
	W0318 14:24:30.918713 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:30.918727 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:30.918747 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:31.004325 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:31.004350 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:31.004367 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:31.093837 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:31.093882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:31.138285 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:31.138318 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:26.698769 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:28.700212 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.200571 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.811482 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.812980 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.299818 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:33.300656 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.798808 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:31.192059 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:31.192106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:33.708873 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:33.723861 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:33.723954 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:33.766843 1129259 cri.go:89] found id: ""
	I0318 14:24:33.766884 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.766899 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:33.766908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:33.766991 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:33.808273 1129259 cri.go:89] found id: ""
	I0318 14:24:33.808308 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.808319 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:33.808327 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:33.808401 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:33.847755 1129259 cri.go:89] found id: ""
	I0318 14:24:33.847789 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.847801 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:33.847823 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:33.847909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:33.888733 1129259 cri.go:89] found id: ""
	I0318 14:24:33.888785 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.888807 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:33.888817 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:33.888892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:33.927231 1129259 cri.go:89] found id: ""
	I0318 14:24:33.927281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.927294 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:33.927301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:33.927370 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:33.968573 1129259 cri.go:89] found id: ""
	I0318 14:24:33.968602 1129259 logs.go:276] 0 containers: []
	W0318 14:24:33.968612 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:33.968619 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:33.968685 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:34.019265 1129259 cri.go:89] found id: ""
	I0318 14:24:34.019298 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.019314 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:34.019321 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:34.019392 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:34.059195 1129259 cri.go:89] found id: ""
	I0318 14:24:34.059226 1129259 logs.go:276] 0 containers: []
	W0318 14:24:34.059237 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:34.059251 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:34.059268 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:34.101211 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:34.101252 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:34.154985 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:34.155029 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:34.169762 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:34.169798 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:34.247258 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:34.247289 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:34.247304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:33.698578 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.698656 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:35.814759 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:38.311080 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:40.312503 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:37.800024 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.801292 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:36.829539 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:36.844908 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:36.845003 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:36.883646 1129259 cri.go:89] found id: ""
	I0318 14:24:36.883673 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.883682 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:36.883688 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:36.883742 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:36.927651 1129259 cri.go:89] found id: ""
	I0318 14:24:36.927685 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.927700 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:36.927706 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:36.927774 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:36.972206 1129259 cri.go:89] found id: ""
	I0318 14:24:36.972243 1129259 logs.go:276] 0 containers: []
	W0318 14:24:36.972256 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:36.972264 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:36.972337 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:37.011161 1129259 cri.go:89] found id: ""
	I0318 14:24:37.011203 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.011217 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:37.011225 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:37.011293 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:37.050426 1129259 cri.go:89] found id: ""
	I0318 14:24:37.050456 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.050465 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:37.050472 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:37.050525 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:37.090240 1129259 cri.go:89] found id: ""
	I0318 14:24:37.090277 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.090288 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:37.090296 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:37.090371 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:37.138359 1129259 cri.go:89] found id: ""
	I0318 14:24:37.138392 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.138405 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:37.138414 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:37.138484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:37.175367 1129259 cri.go:89] found id: ""
	I0318 14:24:37.175397 1129259 logs.go:276] 0 containers: []
	W0318 14:24:37.175406 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:37.175419 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:37.175438 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.190633 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:37.190665 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:37.266426 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:37.266455 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:37.266474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:37.352005 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:37.352052 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:37.398004 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:37.398042 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
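	Note the cadence: process 1129259 repeats the whole pgrep-plus-crictl scan and log gathering roughly every three seconds. A minimal sketch of that wait-and-retry pattern (the probe, interval, and overall deadline here are illustrative assumptions, not values read from minikube's code):

	    // wait_apiserver.go - retries the apiserver probe until it succeeds or a deadline passes.
	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(2 * time.Minute) // overall budget is an assumption
	        for time.Now().Before(deadline) {
	            if conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second); err == nil {
	                conn.Close()
	                fmt.Println("apiserver is up")
	                return
	            }
	            time.Sleep(3 * time.Second) // roughly the cadence visible in the timestamps above
	        }
	        fmt.Println("gave up waiting for the apiserver")
	    }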
	I0318 14:24:39.957926 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:39.972906 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:39.972994 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:40.015482 1129259 cri.go:89] found id: ""
	I0318 14:24:40.015531 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.015543 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:40.015553 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:40.015632 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:40.057869 1129259 cri.go:89] found id: ""
	I0318 14:24:40.057901 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.057913 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:40.057921 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:40.057992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:40.099638 1129259 cri.go:89] found id: ""
	I0318 14:24:40.099666 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.099676 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:40.099683 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:40.099748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:40.137566 1129259 cri.go:89] found id: ""
	I0318 14:24:40.137607 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.137619 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:40.137629 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:40.137698 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:40.178781 1129259 cri.go:89] found id: ""
	I0318 14:24:40.178816 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.178828 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:40.178835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:40.178902 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:40.221065 1129259 cri.go:89] found id: ""
	I0318 14:24:40.221106 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.221118 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:40.221135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:40.221213 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:40.262154 1129259 cri.go:89] found id: ""
	I0318 14:24:40.262193 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.262204 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:40.262212 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:40.262288 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:40.302898 1129259 cri.go:89] found id: ""
	I0318 14:24:40.302932 1129259 logs.go:276] 0 containers: []
	W0318 14:24:40.302944 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:40.302957 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:40.302973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:40.384224 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:40.384248 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:40.384270 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:40.473257 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:40.473313 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:40.513518 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:40.513571 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:40.569342 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:40.569393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:37.698736 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:39.699014 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.813028 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.814259 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:42.300121 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.802581 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:43.085260 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:43.100701 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:43.100773 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:43.141395 1129259 cri.go:89] found id: ""
	I0318 14:24:43.141441 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.141453 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:43.141462 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:43.141531 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:43.185883 1129259 cri.go:89] found id: ""
	I0318 14:24:43.185918 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.185929 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:43.185938 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:43.186012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:43.225249 1129259 cri.go:89] found id: ""
	I0318 14:24:43.225281 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.225292 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:43.225301 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:43.225375 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:43.270433 1129259 cri.go:89] found id: ""
	I0318 14:24:43.270474 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.270484 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:43.270491 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:43.270557 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:43.312947 1129259 cri.go:89] found id: ""
	I0318 14:24:43.312975 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.312986 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:43.312994 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:43.313061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:43.352095 1129259 cri.go:89] found id: ""
	I0318 14:24:43.352130 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.352144 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:43.352153 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:43.352222 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:43.394789 1129259 cri.go:89] found id: ""
	I0318 14:24:43.394820 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.394833 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:43.394840 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:43.394913 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:43.440612 1129259 cri.go:89] found id: ""
	I0318 14:24:43.440646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:43.440655 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:43.440668 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:43.440686 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:43.497257 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:43.497304 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:43.513680 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:43.513715 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:43.599437 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:43.599471 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:43.599490 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:43.681435 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:43.681480 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:42.198235 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:44.199088 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.312598 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.814542 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:47.300765 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.801469 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:46.227650 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:46.242656 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:46.242724 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:46.288400 1129259 cri.go:89] found id: ""
	I0318 14:24:46.288434 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.288448 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:46.288457 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:46.288544 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:46.327648 1129259 cri.go:89] found id: ""
	I0318 14:24:46.327691 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.327704 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:46.327712 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:46.327785 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:46.370251 1129259 cri.go:89] found id: ""
	I0318 14:24:46.370292 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.370305 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:46.370322 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:46.370404 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:46.413589 1129259 cri.go:89] found id: ""
	I0318 14:24:46.413629 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.413639 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:46.413646 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:46.413712 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:46.453557 1129259 cri.go:89] found id: ""
	I0318 14:24:46.453593 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.453606 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:46.453615 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:46.453696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:46.492502 1129259 cri.go:89] found id: ""
	I0318 14:24:46.492538 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.492552 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:46.492560 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:46.492641 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:46.534614 1129259 cri.go:89] found id: ""
	I0318 14:24:46.534646 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.534656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:46.534662 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:46.534722 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:46.576300 1129259 cri.go:89] found id: ""
	I0318 14:24:46.576331 1129259 logs.go:276] 0 containers: []
	W0318 14:24:46.576340 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:46.576351 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:46.576363 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.665281 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:46.665329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:46.712011 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:46.712050 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:46.799071 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:46.799128 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:46.814892 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:46.814921 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:46.893065 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.393340 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:49.407307 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:49.407388 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:49.449296 1129259 cri.go:89] found id: ""
	I0318 14:24:49.449330 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.449343 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:49.449351 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:49.449412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:49.489753 1129259 cri.go:89] found id: ""
	I0318 14:24:49.489781 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.489790 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:49.489796 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:49.489865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:49.533692 1129259 cri.go:89] found id: ""
	I0318 14:24:49.533740 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.533756 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:49.533765 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:49.533849 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:49.580932 1129259 cri.go:89] found id: ""
	I0318 14:24:49.580980 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.580992 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:49.581001 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:49.581090 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:49.617642 1129259 cri.go:89] found id: ""
	I0318 14:24:49.617672 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.617684 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:49.617692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:49.617758 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:49.655313 1129259 cri.go:89] found id: ""
	I0318 14:24:49.655342 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.655351 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:49.655358 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:49.655412 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:49.694613 1129259 cri.go:89] found id: ""
	I0318 14:24:49.694645 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.694656 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:49.694665 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:49.694735 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:49.736954 1129259 cri.go:89] found id: ""
	I0318 14:24:49.737005 1129259 logs.go:276] 0 containers: []
	W0318 14:24:49.737017 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:49.737030 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:49.737051 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:49.779496 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:49.779540 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:49.836505 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:49.836549 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:49.853299 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:49.853329 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:49.929231 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:49.929254 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:49.929269 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:46.699746 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:49.198789 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:51.199313 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.311753 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.311952 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.300974 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:54.301766 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:52.513104 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:52.534931 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:52.535032 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:52.578668 1129259 cri.go:89] found id: ""
	I0318 14:24:52.578706 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.578720 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:52.578731 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:52.578788 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:52.616799 1129259 cri.go:89] found id: ""
	I0318 14:24:52.616829 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.616838 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:52.616845 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:52.616909 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:52.659502 1129259 cri.go:89] found id: ""
	I0318 14:24:52.659595 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.659616 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:52.659627 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:52.659696 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:52.704402 1129259 cri.go:89] found id: ""
	I0318 14:24:52.704431 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.704439 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:52.704446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:52.704524 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:52.748018 1129259 cri.go:89] found id: ""
	I0318 14:24:52.748043 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.748052 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:52.748059 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:52.748128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:52.786901 1129259 cri.go:89] found id: ""
	I0318 14:24:52.786942 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.786956 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:52.786966 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:52.787040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:52.828259 1129259 cri.go:89] found id: ""
	I0318 14:24:52.828288 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.828298 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:52.828304 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:52.828360 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:52.867439 1129259 cri.go:89] found id: ""
	I0318 14:24:52.867470 1129259 logs.go:276] 0 containers: []
	W0318 14:24:52.867482 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:52.867495 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:52.867513 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:52.920709 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:52.920755 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:52.936596 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:52.936631 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:53.012271 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:53.012300 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:53.012315 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.092318 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:53.092358 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:55.642662 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:55.656650 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:55.656725 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:55.700050 1129259 cri.go:89] found id: ""
	I0318 14:24:55.700085 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.700099 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:55.700109 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:55.700183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:55.742561 1129259 cri.go:89] found id: ""
	I0318 14:24:55.742599 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.742608 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:55.742614 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:55.742668 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:55.780395 1129259 cri.go:89] found id: ""
	I0318 14:24:55.780427 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.780435 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:55.780442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:55.780505 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:55.819259 1129259 cri.go:89] found id: ""
	I0318 14:24:55.819291 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.819301 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:55.819310 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:55.819366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:55.859189 1129259 cri.go:89] found id: ""
	I0318 14:24:55.859227 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.859240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:55.859249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:55.859322 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:55.900012 1129259 cri.go:89] found id: ""
	I0318 14:24:55.900050 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.900062 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:55.900070 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:55.900146 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:55.936548 1129259 cri.go:89] found id: ""
	I0318 14:24:55.936578 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.936587 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:55.936595 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:55.936661 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:55.977201 1129259 cri.go:89] found id: ""
	I0318 14:24:55.977241 1129259 logs.go:276] 0 containers: []
	W0318 14:24:55.977254 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:55.977266 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:55.977281 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:56.030548 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:56.030603 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:56.047923 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:56.047959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:56.129425 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:56.129457 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:56.129474 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:53.199935 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:55.699461 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.811981 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.814200 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.799464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:58.800623 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:24:56.224109 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:56.224173 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.771513 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:24:58.786323 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:24:58.786416 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:24:58.832801 1129259 cri.go:89] found id: ""
	I0318 14:24:58.832843 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.832856 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:24:58.832868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:24:58.832945 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:24:58.873757 1129259 cri.go:89] found id: ""
	I0318 14:24:58.873792 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.873802 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:24:58.873811 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:24:58.873875 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:24:58.920727 1129259 cri.go:89] found id: ""
	I0318 14:24:58.920759 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.920769 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:24:58.920775 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:24:58.920841 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:24:58.975483 1129259 cri.go:89] found id: ""
	I0318 14:24:58.975524 1129259 logs.go:276] 0 containers: []
	W0318 14:24:58.975538 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:24:58.975549 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:24:58.975627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:24:59.027055 1129259 cri.go:89] found id: ""
	I0318 14:24:59.027092 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.027104 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:24:59.027113 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:24:59.027195 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:24:59.073394 1129259 cri.go:89] found id: ""
	I0318 14:24:59.073435 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.073457 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:24:59.073466 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:24:59.073536 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:24:59.114945 1129259 cri.go:89] found id: ""
	I0318 14:24:59.114982 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.114991 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:24:59.114998 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:24:59.115056 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:24:59.155496 1129259 cri.go:89] found id: ""
	I0318 14:24:59.155533 1129259 logs.go:276] 0 containers: []
	W0318 14:24:59.155545 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:24:59.155558 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:24:59.155574 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:24:59.214435 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:24:59.214476 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:24:59.230733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:24:59.230780 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:24:59.308976 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:24:59.309007 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:24:59.309024 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:24:59.396237 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:24:59.396287 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:24:58.198049 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:00.199613 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.312698 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.811687 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.299462 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:03.300239 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:05.301621 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:01.941736 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:01.955973 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:01.956058 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:01.995149 1129259 cri.go:89] found id: ""
	I0318 14:25:01.995187 1129259 logs.go:276] 0 containers: []
	W0318 14:25:01.995208 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:01.995217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:01.995287 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:02.036739 1129259 cri.go:89] found id: ""
	I0318 14:25:02.036780 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.036794 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:02.036804 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:02.036880 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:02.074909 1129259 cri.go:89] found id: ""
	I0318 14:25:02.074937 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.074947 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:02.074954 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:02.075039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:02.112164 1129259 cri.go:89] found id: ""
	I0318 14:25:02.112203 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.112215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:02.112223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:02.112281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:02.150756 1129259 cri.go:89] found id: ""
	I0318 14:25:02.150795 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.150808 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:02.150816 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:02.150885 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:02.194475 1129259 cri.go:89] found id: ""
	I0318 14:25:02.194511 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.194522 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:02.194531 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:02.194603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:02.237472 1129259 cri.go:89] found id: ""
	I0318 14:25:02.237499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.237508 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:02.237514 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:02.237582 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:02.278094 1129259 cri.go:89] found id: ""
	I0318 14:25:02.278136 1129259 logs.go:276] 0 containers: []
	W0318 14:25:02.278157 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:02.278171 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:02.278190 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:02.366946 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:02.367004 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.412234 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:02.412267 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:02.470036 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:02.470109 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:02.487051 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:02.487085 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:02.574515 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.074768 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:05.090386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:05.090466 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:05.131144 1129259 cri.go:89] found id: ""
	I0318 14:25:05.131180 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.131190 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:05.131198 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:05.131254 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:05.171613 1129259 cri.go:89] found id: ""
	I0318 14:25:05.171653 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.171668 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:05.171676 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:05.171748 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:05.219256 1129259 cri.go:89] found id: ""
	I0318 14:25:05.219296 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.219310 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:05.219320 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:05.219410 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:05.258580 1129259 cri.go:89] found id: ""
	I0318 14:25:05.258615 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.258625 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:05.258633 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:05.258688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:05.297198 1129259 cri.go:89] found id: ""
	I0318 14:25:05.297230 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.297240 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:05.297249 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:05.297319 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:05.341148 1129259 cri.go:89] found id: ""
	I0318 14:25:05.341184 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.341196 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:05.341205 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:05.341274 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:05.382094 1129259 cri.go:89] found id: ""
	I0318 14:25:05.382121 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.382129 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:05.382135 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:05.382199 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:05.422027 1129259 cri.go:89] found id: ""
	I0318 14:25:05.422074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:05.422083 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:05.422092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:05.422106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:05.474193 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:05.474238 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:05.490325 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:05.490364 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:05.566999 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:05.567029 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:05.567048 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:05.647205 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:05.647247 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:02.200341 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:04.698040 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:06.312239 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.811427 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:07.800597 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:10.300964 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:08.192390 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:08.207905 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:08.207992 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:08.247221 1129259 cri.go:89] found id: ""
	I0318 14:25:08.247257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.247269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:08.247278 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:08.247347 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:08.289460 1129259 cri.go:89] found id: ""
	I0318 14:25:08.289496 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.289509 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:08.289516 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:08.289601 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:08.330232 1129259 cri.go:89] found id: ""
	I0318 14:25:08.330273 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.330286 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:08.330294 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:08.330366 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:08.368035 1129259 cri.go:89] found id: ""
	I0318 14:25:08.368074 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.368086 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:08.368094 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:08.368170 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:08.413598 1129259 cri.go:89] found id: ""
	I0318 14:25:08.413631 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.413641 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:08.413647 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:08.413745 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:08.451706 1129259 cri.go:89] found id: ""
	I0318 14:25:08.451742 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.451754 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:08.451762 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:08.451856 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:08.491037 1129259 cri.go:89] found id: ""
	I0318 14:25:08.491075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.491088 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:08.491096 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:08.491175 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:08.529376 1129259 cri.go:89] found id: ""
	I0318 14:25:08.529412 1129259 logs.go:276] 0 containers: []
	W0318 14:25:08.529423 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:08.529435 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:08.529453 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:08.586539 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:08.586580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:08.602197 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:08.602226 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:08.678158 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:08.678186 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:08.678202 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:08.764272 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:08.764326 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:06.700315 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:09.198241 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.198296 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.312458 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:13.312602 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:12.799474 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:14.800216 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:11.307681 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:11.322482 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:11.322565 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:11.361333 1129259 cri.go:89] found id: ""
	I0318 14:25:11.361366 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.361378 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:11.361386 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:11.361457 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:11.399404 1129259 cri.go:89] found id: ""
	I0318 14:25:11.399444 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.399468 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:11.399486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:11.399556 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:11.438279 1129259 cri.go:89] found id: ""
	I0318 14:25:11.438324 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.438338 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:11.438350 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:11.438426 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:11.474991 1129259 cri.go:89] found id: ""
	I0318 14:25:11.475039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.475050 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:11.475058 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:11.475128 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:11.511152 1129259 cri.go:89] found id: ""
	I0318 14:25:11.511185 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.511195 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:11.511204 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:11.511271 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:11.549752 1129259 cri.go:89] found id: ""
	I0318 14:25:11.549794 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.549806 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:11.549814 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:11.549886 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:11.587089 1129259 cri.go:89] found id: ""
	I0318 14:25:11.587117 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.587135 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:11.587152 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:11.587205 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:11.621515 1129259 cri.go:89] found id: ""
	I0318 14:25:11.621547 1129259 logs.go:276] 0 containers: []
	W0318 14:25:11.621559 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:11.621574 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:11.621592 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:11.680905 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:11.680948 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:11.696472 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:11.696508 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:11.772013 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:11.772035 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:11.772054 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:11.855131 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:11.855182 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:14.396034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:14.410601 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:14.410677 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:14.449351 1129259 cri.go:89] found id: ""
	I0318 14:25:14.449392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.449404 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:14.449413 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:14.449484 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:14.488011 1129259 cri.go:89] found id: ""
	I0318 14:25:14.488039 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.488049 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:14.488055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:14.488115 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:14.529089 1129259 cri.go:89] found id: ""
	I0318 14:25:14.529128 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.529141 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:14.529148 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:14.529219 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:14.567919 1129259 cri.go:89] found id: ""
	I0318 14:25:14.567952 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.567962 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:14.567975 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:14.568039 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:14.604744 1129259 cri.go:89] found id: ""
	I0318 14:25:14.604785 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.604798 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:14.604806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:14.604872 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:14.643367 1129259 cri.go:89] found id: ""
	I0318 14:25:14.643396 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.643405 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:14.643411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:14.643473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:14.680584 1129259 cri.go:89] found id: ""
	I0318 14:25:14.680623 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.680639 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:14.680652 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:14.680726 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:14.720040 1129259 cri.go:89] found id: ""
	I0318 14:25:14.720070 1129259 logs.go:276] 0 containers: []
	W0318 14:25:14.720080 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:14.720092 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:14.720106 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:14.773483 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:14.773525 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:14.788628 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:14.788664 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:14.862912 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:14.862941 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:14.862959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:14.945001 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:14.945047 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:13.199314 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.199666 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:15.812120 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.813219 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.814195 1128788 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:16.800432 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.299589 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:17.491984 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:17.505305 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:17.505373 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:17.548465 1129259 cri.go:89] found id: ""
	I0318 14:25:17.548493 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.548501 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:17.548508 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:17.548566 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:17.590043 1129259 cri.go:89] found id: ""
	I0318 14:25:17.590075 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.590084 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:17.590090 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:17.590147 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:17.628014 1129259 cri.go:89] found id: ""
	I0318 14:25:17.628042 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.628051 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:17.628057 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:17.628108 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:17.666781 1129259 cri.go:89] found id: ""
	I0318 14:25:17.666814 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.666826 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:17.666835 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:17.666892 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:17.705989 1129259 cri.go:89] found id: ""
	I0318 14:25:17.706028 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.706048 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:17.706056 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:17.706134 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:17.743782 1129259 cri.go:89] found id: ""
	I0318 14:25:17.743815 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.743843 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:17.743853 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:17.743923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:17.787400 1129259 cri.go:89] found id: ""
	I0318 14:25:17.787431 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.787439 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:17.787446 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:17.787509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:17.825236 1129259 cri.go:89] found id: ""
	I0318 14:25:17.825270 1129259 logs.go:276] 0 containers: []
	W0318 14:25:17.825279 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:17.825291 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:17.825309 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:17.877845 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:17.877888 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:17.893733 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:17.893768 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:17.987782 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:17.987809 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:17.987845 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:18.077756 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:18.077802 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:20.625530 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:20.639692 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:20.639783 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:20.678892 1129259 cri.go:89] found id: ""
	I0318 14:25:20.678927 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.678939 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:20.678948 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:20.679020 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:20.716077 1129259 cri.go:89] found id: ""
	I0318 14:25:20.716109 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.716119 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:20.716124 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:20.716179 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:20.756708 1129259 cri.go:89] found id: ""
	I0318 14:25:20.756737 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.756748 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:20.756756 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:20.756823 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:20.793692 1129259 cri.go:89] found id: ""
	I0318 14:25:20.793728 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.793740 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:20.793749 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:20.793822 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:20.834607 1129259 cri.go:89] found id: ""
	I0318 14:25:20.834638 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.834649 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:20.834657 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:20.834728 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:20.872583 1129259 cri.go:89] found id: ""
	I0318 14:25:20.872616 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.872625 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:20.872632 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:20.872688 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:20.906061 1129259 cri.go:89] found id: ""
	I0318 14:25:20.906099 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.906112 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:20.906120 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:20.906183 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:20.942582 1129259 cri.go:89] found id: ""
	I0318 14:25:20.942612 1129259 logs.go:276] 0 containers: []
	W0318 14:25:20.942621 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:20.942632 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:20.942646 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:20.958461 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:20.958500 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:21.032841 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:21.032867 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:21.032896 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:21.110717 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:21.110764 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:17.698783 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:19.698980 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.804733 1128788 pod_ready.go:81] duration metric: took 4m0.000568505s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:21.804764 1128788 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-jr9wp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:21.804783 1128788 pod_ready.go:38] duration metric: took 4m13.068724908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:21.804834 1128788 kubeadm.go:591] duration metric: took 4m21.284795634s to restartPrimaryControlPlane
	W0318 14:25:21.804919 1128788 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:21.804954 1128788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:21.300889 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:23.800547 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:25.803188 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:21.160015 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:21.160055 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:23.715103 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:23.729231 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:23.729324 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:23.779123 1129259 cri.go:89] found id: ""
	I0318 14:25:23.779157 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.779166 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:23.779172 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:23.779247 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:23.820353 1129259 cri.go:89] found id: ""
	I0318 14:25:23.820397 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.820410 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:23.820427 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:23.820498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:23.857375 1129259 cri.go:89] found id: ""
	I0318 14:25:23.857405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.857416 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:23.857422 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:23.857490 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:23.895114 1129259 cri.go:89] found id: ""
	I0318 14:25:23.895153 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.895165 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:23.895173 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:23.895239 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:23.939728 1129259 cri.go:89] found id: ""
	I0318 14:25:23.939764 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.939776 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:23.939784 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:23.939866 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:23.980585 1129259 cri.go:89] found id: ""
	I0318 14:25:23.980618 1129259 logs.go:276] 0 containers: []
	W0318 14:25:23.980631 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:23.980640 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:23.980711 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:24.019562 1129259 cri.go:89] found id: ""
	I0318 14:25:24.019596 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.019604 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:24.019611 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:24.019700 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:24.069418 1129259 cri.go:89] found id: ""
	I0318 14:25:24.069455 1129259 logs.go:276] 0 containers: []
	W0318 14:25:24.069466 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:24.069478 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:24.069502 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:24.150859 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:24.150893 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:24.150913 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:24.258358 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:24.258408 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:24.304571 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:24.304609 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:24.366826 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:24.366882 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:21.699436 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:24.199193 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:28.300495 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:30.300870 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:26.886056 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:26.904239 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:26.904315 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:26.950812 1129259 cri.go:89] found id: ""
	I0318 14:25:26.950847 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.950859 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:26.950866 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:26.950957 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:26.999189 1129259 cri.go:89] found id: ""
	I0318 14:25:26.999224 1129259 logs.go:276] 0 containers: []
	W0318 14:25:26.999237 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:26.999246 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:26.999312 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:27.040452 1129259 cri.go:89] found id: ""
	I0318 14:25:27.040488 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.040499 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:27.040505 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:27.040586 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:27.078751 1129259 cri.go:89] found id: ""
	I0318 14:25:27.078782 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.078792 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:27.078798 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:27.078865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:27.116428 1129259 cri.go:89] found id: ""
	I0318 14:25:27.116465 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.116477 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:27.116486 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:27.116567 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:27.152882 1129259 cri.go:89] found id: ""
	I0318 14:25:27.152922 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.152934 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:27.152942 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:27.153023 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:27.194470 1129259 cri.go:89] found id: ""
	I0318 14:25:27.194506 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.194518 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:27.194528 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:27.194599 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:27.235910 1129259 cri.go:89] found id: ""
	I0318 14:25:27.235939 1129259 logs.go:276] 0 containers: []
	W0318 14:25:27.235948 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:27.235959 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:27.235973 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:27.302132 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:27.302189 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:27.315806 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:27.315866 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:27.398210 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:27.398240 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:27.398255 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:27.479388 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:27.479432 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:30.026721 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:30.043060 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:30.043133 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:30.083373 1129259 cri.go:89] found id: ""
	I0318 14:25:30.083405 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.083415 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:30.083423 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:30.083498 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:30.121448 1129259 cri.go:89] found id: ""
	I0318 14:25:30.121485 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.121498 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:30.121506 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:30.121587 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:30.160527 1129259 cri.go:89] found id: ""
	I0318 14:25:30.160557 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.160566 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:30.160574 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:30.160636 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:30.199812 1129259 cri.go:89] found id: ""
	I0318 14:25:30.199870 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.199884 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:30.199895 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:30.199970 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:30.242922 1129259 cri.go:89] found id: ""
	I0318 14:25:30.242959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.242971 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:30.242983 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:30.243053 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:30.280918 1129259 cri.go:89] found id: ""
	I0318 14:25:30.280949 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.280962 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:30.280968 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:30.281021 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:30.319928 1129259 cri.go:89] found id: ""
	I0318 14:25:30.319959 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.319968 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:30.319974 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:30.320040 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:30.363693 1129259 cri.go:89] found id: ""
	I0318 14:25:30.363723 1129259 logs.go:276] 0 containers: []
	W0318 14:25:30.363733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:30.363744 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:30.363757 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:30.419559 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:30.419608 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:30.435030 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:30.435078 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:30.514849 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:30.514885 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:30.514903 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:30.601660 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:30.601711 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:26.700384 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:29.203012 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:32.800506 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:35.299464 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.150817 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:33.165959 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:33.166045 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:33.205823 1129259 cri.go:89] found id: ""
	I0318 14:25:33.205862 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.205874 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:33.205884 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:33.205951 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:33.267817 1129259 cri.go:89] found id: ""
	I0318 14:25:33.267865 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.267878 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:33.267886 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:33.267977 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:33.309310 1129259 cri.go:89] found id: ""
	I0318 14:25:33.309338 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.309346 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:33.309353 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:33.309417 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:33.350169 1129259 cri.go:89] found id: ""
	I0318 14:25:33.350202 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.350215 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:33.350223 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:33.350289 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:33.391919 1129259 cri.go:89] found id: ""
	I0318 14:25:33.391961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.391973 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:33.391981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:33.392049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:33.433001 1129259 cri.go:89] found id: ""
	I0318 14:25:33.433056 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.433069 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:33.433078 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:33.433150 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:33.474482 1129259 cri.go:89] found id: ""
	I0318 14:25:33.474513 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.474533 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:33.474542 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:33.474603 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:33.512280 1129259 cri.go:89] found id: ""
	I0318 14:25:33.512314 1129259 logs.go:276] 0 containers: []
	W0318 14:25:33.512323 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:33.512333 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:33.512347 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:33.593336 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:33.593378 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:33.636001 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:33.636038 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:33.688881 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:33.688922 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:33.704549 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:33.704580 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:33.779659 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:31.698372 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:33.699450 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.199443 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:37.299695 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:39.800741 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:36.280240 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:36.295566 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:36.295646 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:36.336195 1129259 cri.go:89] found id: ""
	I0318 14:25:36.336235 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.336248 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:36.336257 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:36.336334 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:36.378038 1129259 cri.go:89] found id: ""
	I0318 14:25:36.378084 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.378099 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:36.378110 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:36.378191 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:36.425389 1129259 cri.go:89] found id: ""
	I0318 14:25:36.425433 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.425446 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:36.425453 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:36.425512 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:36.464639 1129259 cri.go:89] found id: ""
	I0318 14:25:36.464683 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.464749 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:36.464763 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:36.464828 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:36.509515 1129259 cri.go:89] found id: ""
	I0318 14:25:36.509550 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.509563 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:36.509573 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:36.509645 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:36.554761 1129259 cri.go:89] found id: ""
	I0318 14:25:36.554789 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.554800 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:36.554806 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:36.554859 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:36.593817 1129259 cri.go:89] found id: ""
	I0318 14:25:36.593852 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.593861 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:36.593868 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:36.593923 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:36.634005 1129259 cri.go:89] found id: ""
	I0318 14:25:36.634038 1129259 logs.go:276] 0 containers: []
	W0318 14:25:36.634050 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:36.634063 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:36.634081 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:36.687869 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:36.687910 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:36.704507 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:36.704550 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:36.785201 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:36.785257 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:36.785275 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:36.866058 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:36.866104 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:39.409796 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:39.426897 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:39.426972 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:39.472221 1129259 cri.go:89] found id: ""
	I0318 14:25:39.472257 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.472269 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:39.472285 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:39.472352 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:39.513920 1129259 cri.go:89] found id: ""
	I0318 14:25:39.513961 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.513974 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:39.513981 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:39.514049 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:39.555502 1129259 cri.go:89] found id: ""
	I0318 14:25:39.555538 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.555552 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:39.555565 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:39.555627 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:39.601583 1129259 cri.go:89] found id: ""
	I0318 14:25:39.601614 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.601622 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:39.601628 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:39.601693 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:39.648429 1129259 cri.go:89] found id: ""
	I0318 14:25:39.648464 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.648473 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:39.648488 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:39.648564 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:39.698498 1129259 cri.go:89] found id: ""
	I0318 14:25:39.698531 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.698543 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:39.698551 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:39.698617 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:39.751350 1129259 cri.go:89] found id: ""
	I0318 14:25:39.751392 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.751403 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:39.751411 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:39.751482 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:39.801912 1129259 cri.go:89] found id: ""
	I0318 14:25:39.801944 1129259 logs.go:276] 0 containers: []
	W0318 14:25:39.801956 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:39.801968 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:39.801987 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:39.816041 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:39.816076 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:39.899569 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:39.899599 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:39.899621 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:39.980913 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:39.980961 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:40.026279 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:40.026319 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:38.199879 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:40.698620 1128964 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:41.801098 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:44.301379 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:42.585034 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:42.601055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:42.601161 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:42.652386 1129259 cri.go:89] found id: ""
	I0318 14:25:42.652422 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.652434 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:42.652442 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:42.652517 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:42.703304 1129259 cri.go:89] found id: ""
	I0318 14:25:42.703341 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.703353 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:42.703361 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:42.703433 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:42.747938 1129259 cri.go:89] found id: ""
	I0318 14:25:42.747972 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.747983 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:42.747992 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:42.748061 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:42.793889 1129259 cri.go:89] found id: ""
	I0318 14:25:42.793923 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.793934 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:42.793943 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:42.794012 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:42.837991 1129259 cri.go:89] found id: ""
	I0318 14:25:42.838096 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.838124 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:42.838143 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:42.838225 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:42.881892 1129259 cri.go:89] found id: ""
	I0318 14:25:42.882011 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.882036 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:42.882055 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:42.882140 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:42.921175 1129259 cri.go:89] found id: ""
	I0318 14:25:42.921217 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.921229 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:42.921238 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:42.921310 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:42.966634 1129259 cri.go:89] found id: ""
	I0318 14:25:42.966674 1129259 logs.go:276] 0 containers: []
	W0318 14:25:42.966687 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:42.966702 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:42.966720 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:42.982243 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:42.982290 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:43.082154 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:43.082187 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:43.082205 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:43.175904 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:43.175953 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:43.220128 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:43.220224 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:45.785917 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:45.801648 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:25:45.801736 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:25:45.842731 1129259 cri.go:89] found id: ""
	I0318 14:25:45.842769 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.842782 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:25:45.842797 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:25:45.842858 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:25:45.887726 1129259 cri.go:89] found id: ""
	I0318 14:25:45.887771 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.887783 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:25:45.887792 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:25:45.887900 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:25:45.929349 1129259 cri.go:89] found id: ""
	I0318 14:25:45.929384 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.929395 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:25:45.929401 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:25:45.929473 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:25:45.971540 1129259 cri.go:89] found id: ""
	I0318 14:25:45.971582 1129259 logs.go:276] 0 containers: []
	W0318 14:25:45.971595 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:25:45.971604 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:25:45.971681 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:25:46.012461 1129259 cri.go:89] found id: ""
	I0318 14:25:46.012499 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.012521 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:25:46.012530 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:25:46.012607 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:25:46.057527 1129259 cri.go:89] found id: ""
	I0318 14:25:46.057556 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.057566 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:25:46.057572 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:25:46.057628 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:25:46.101115 1129259 cri.go:89] found id: ""
	I0318 14:25:46.101146 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.101156 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:25:46.101163 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:25:46.101218 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:25:46.144690 1129259 cri.go:89] found id: ""
	I0318 14:25:46.144722 1129259 logs.go:276] 0 containers: []
	W0318 14:25:46.144733 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:25:46.144747 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:25:46.144763 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:25:41.692077 1128964 pod_ready.go:81] duration metric: took 4m0.00104s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" ...
	E0318 14:25:41.692109 1128964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-4vrvb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:25:41.692136 1128964 pod_ready.go:38] duration metric: took 4m13.711186182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:25:41.692170 1128964 kubeadm.go:591] duration metric: took 4m21.341445822s to restartPrimaryControlPlane
	W0318 14:25:41.692279 1128964 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:41.692345 1128964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:46.800687 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:49.300012 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:46.198508 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:25:46.198552 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:25:46.213920 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:25:46.213959 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:25:46.307837 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:25:46.307870 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:25:46.307884 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:25:46.393348 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:25:46.393393 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 14:25:48.947758 1129259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:25:48.963529 1129259 kubeadm.go:591] duration metric: took 4m3.701563316s to restartPrimaryControlPlane
	W0318 14:25:48.963609 1129259 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:25:48.963632 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:25:50.782362 1129259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.818697959s)
	I0318 14:25:50.782464 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:50.798866 1129259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:50.810841 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:50.822394 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:50.822417 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:50.822464 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:50.833695 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:50.833763 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:50.845393 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:50.856807 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:50.856882 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:50.868756 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.879442 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:50.879517 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:50.890725 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:50.901505 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:50.901576 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
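(Note: the grep/rm sequence above is the stale-kubeconfig cleanup step: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails; here every file is already missing, so each grep exits with status 2 and the rm is a no-op. A minimal sketch of that check-and-remove pattern, assuming a hypothetical runCmd helper in place of minikube's ssh_runner; illustrative only:

    // Illustrative only: approximates the grep-then-remove pattern in the
    // log above. runCmd is a hypothetical stand-in for running a command
    // on the node over SSH.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runCmd(cmd string) error {
        return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    // cleanStaleKubeconfigs removes any kubeconfig that does not reference
    // the expected control-plane endpoint (or does not exist).
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            if err := runCmd(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
                _ = runCmd("sudo rm -f " + path)
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }

With the stale configs cleared, the flow proceeds to the kubeadm init invocation on the next line.)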
	I0318 14:25:50.912911 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:50.994085 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:25:50.994244 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:51.166111 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:51.166240 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:51.166390 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:51.374393 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:51.376093 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:51.376230 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:51.376323 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:51.376464 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:51.376538 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:51.376620 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:51.376715 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:51.376821 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:51.376930 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:51.377042 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:51.377141 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:51.377202 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:51.377292 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:51.485218 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:51.556003 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:51.865954 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:52.103582 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:52.120863 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:52.122310 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:52.122433 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:52.280292 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:54.173048 1128788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.368065771s)
	I0318 14:25:54.173145 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:25:54.192139 1128788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:25:54.204909 1128788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:25:54.217096 1128788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:25:54.217126 1128788 kubeadm.go:156] found existing configuration files:
	
	I0318 14:25:54.217182 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:25:54.227905 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:25:54.228009 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:25:54.239854 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:25:54.250668 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:25:54.250744 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:25:54.263509 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.274202 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:25:54.274265 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:25:54.285342 1128788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:25:54.296064 1128788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:25:54.296157 1128788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:25:54.307985 1128788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:25:54.371118 1128788 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:25:54.371202 1128788 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:25:54.551187 1128788 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:25:54.551377 1128788 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:25:54.551551 1128788 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:25:54.780034 1128788 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:25:54.782426 1128788 out.go:204]   - Generating certificates and keys ...
	I0318 14:25:54.782545 1128788 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:25:54.782650 1128788 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:25:54.782735 1128788 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:25:54.782829 1128788 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:25:54.782930 1128788 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:25:54.783213 1128788 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:25:54.783717 1128788 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:25:54.784390 1128788 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:25:54.784849 1128788 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:25:54.785263 1128788 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:25:54.785725 1128788 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:25:54.785826 1128788 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:25:55.130998 1128788 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:25:55.387076 1128788 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:25:55.517240 1128788 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:25:51.300209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:53.303010 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.800703 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:25:55.906565 1128788 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:25:55.907198 1128788 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:55.909674 1128788 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:25:52.282451 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:25:52.282559 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:52.289015 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:52.290093 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:52.290987 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:25:52.293794 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:55.912196 1128788 out.go:204]   - Booting up control plane ...
	I0318 14:25:55.912323 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:25:55.912407 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:25:55.912494 1128788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:25:55.932596 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:25:55.935171 1128788 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:25:55.935520 1128788 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:25:56.083395 1128788 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:25:58.300288 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:00.800291 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:02.086878 1128788 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002842 seconds
	I0318 14:26:02.087052 1128788 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:02.102499 1128788 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:02.637889 1128788 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:02.638152 1128788 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-767719 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:03.157386 1128788 kubeadm.go:309] [bootstrap-token] Using token: do2whq.efhsaljmpmqgv9gj
	I0318 14:26:03.159248 1128788 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:03.159429 1128788 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:03.167328 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:03.180628 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:03.185253 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:03.190014 1128788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:03.202714 1128788 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:03.223282 1128788 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:03.504303 1128788 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:03.614837 1128788 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:03.614872 1128788 kubeadm.go:309] 
	I0318 14:26:03.614978 1128788 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:03.615004 1128788 kubeadm.go:309] 
	I0318 14:26:03.615107 1128788 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:03.615117 1128788 kubeadm.go:309] 
	I0318 14:26:03.615149 1128788 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:03.615219 1128788 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:03.615285 1128788 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:03.615293 1128788 kubeadm.go:309] 
	I0318 14:26:03.615354 1128788 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:03.615365 1128788 kubeadm.go:309] 
	I0318 14:26:03.615421 1128788 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:03.615430 1128788 kubeadm.go:309] 
	I0318 14:26:03.615486 1128788 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:03.615578 1128788 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:03.615669 1128788 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:03.615679 1128788 kubeadm.go:309] 
	I0318 14:26:03.615778 1128788 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:03.615887 1128788 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:03.615897 1128788 kubeadm.go:309] 
	I0318 14:26:03.615998 1128788 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616120 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:03.616149 1128788 kubeadm.go:309] 	--control-plane 
	I0318 14:26:03.616159 1128788 kubeadm.go:309] 
	I0318 14:26:03.616266 1128788 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:03.616276 1128788 kubeadm.go:309] 
	I0318 14:26:03.616371 1128788 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token do2whq.efhsaljmpmqgv9gj \
	I0318 14:26:03.616500 1128788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:03.617330 1128788 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:03.617374 1128788 cni.go:84] Creating CNI manager for ""
	I0318 14:26:03.617384 1128788 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:03.619394 1128788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:03.620836 1128788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:03.665582 1128788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:26:03.812834 1128788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:03.812897 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:03.812943 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-767719 minikube.k8s.io/updated_at=2024_03_18T14_26_03_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=embed-certs-767719 minikube.k8s.io/primary=true
	I0318 14:26:03.899419 1128788 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:04.104407 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:04.604499 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.104532 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:05.605047 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:02.800707 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:04.802167 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:06.105187 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:06.604462 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.104411 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.605096 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.104448 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:08.604430 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.104707 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:09.605130 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.104955 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:10.605165 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:07.300575 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:09.798776 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:11.104436 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.605273 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.104851 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:12.604819 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.104669 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:13.605089 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.105486 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:14.604568 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.104455 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:15.604422 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:11.799935 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:13.800907 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:15.801754 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:16.105107 1128788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:16.205506 1128788 kubeadm.go:1107] duration metric: took 12.39266353s to wait for elevateKubeSystemPrivileges
	W0318 14:26:16.205558 1128788 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:16.205570 1128788 kubeadm.go:393] duration metric: took 5m15.738081871s to StartCluster
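(Note: the long run of "kubectl get sa default" lines above, polled at roughly 500ms intervals, is the wait the log labels elevateKubeSystemPrivileges: the default service account must exist before the cluster-admin role binding can be applied. A minimal sketch of that poll-until-ready loop, with kubectlGetSA as a hypothetical stand-in for running the command on the node; illustrative only:

    // Illustrative only: approximates the retry loop behind the repeated
    // "kubectl get sa default" lines above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // kubectlGetSA is a placeholder for running
    // "kubectl get sa default --kubeconfig=..." on the node.
    func kubectlGetSA() error {
        return errors.New("serviceaccount \"default\" not found yet")
    }

    // waitForDefaultServiceAccount polls until the default service account
    // exists or the timeout elapses.
    func waitForDefaultServiceAccount(timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := kubectlGetSA(); err == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        _ = waitForDefaultServiceAccount(2*time.Minute, 500*time.Millisecond)
    }

Once the account appears the wait completes (about 12.4s in this run) and the flow moves on to kubeconfig settings and addon setup below.)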
	I0318 14:26:16.205599 1128788 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.205720 1128788 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:16.208645 1128788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:16.209157 1128788 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:16.210915 1128788 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:16.209206 1128788 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:16.209401 1128788 config.go:182] Loaded profile config "embed-certs-767719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:16.212258 1128788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:16.212275 1128788 addons.go:69] Setting default-storageclass=true in profile "embed-certs-767719"
	I0318 14:26:16.212351 1128788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-767719"
	I0318 14:26:16.212260 1128788 addons.go:69] Setting metrics-server=true in profile "embed-certs-767719"
	I0318 14:26:16.212415 1128788 addons.go:234] Setting addon metrics-server=true in "embed-certs-767719"
	W0318 14:26:16.212431 1128788 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:16.212469 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212260 1128788 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-767719"
	I0318 14:26:16.212512 1128788 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-767719"
	W0318 14:26:16.212527 1128788 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:16.212560 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.212983 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.212947 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213003 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.213028 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.213040 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.231532 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0318 14:26:16.231543 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0318 14:26:16.232128 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0318 14:26:16.232280 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232284 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.232882 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.232907 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.232922 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.233258 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233284 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.233360 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.233479 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.233501 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.235956 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236151 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.236372 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236411 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.236545 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.236568 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.240163 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.244336 1128788 addons.go:234] Setting addon default-storageclass=true in "embed-certs-767719"
	W0318 14:26:16.244370 1128788 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:16.244407 1128788 host.go:66] Checking if "embed-certs-767719" exists ...
	I0318 14:26:16.244845 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.244894 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.257940 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0318 14:26:16.258701 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.259359 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.259386 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.259769 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.260030 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.262272 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.262286 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0318 14:26:16.264459 1128788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:16.262834 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.265430 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I0318 14:26:16.266198 1128788 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.266220 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:16.266240 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.266482 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.266663 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.266676 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267253 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.267277 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.267753 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.268456 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.268605 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.269068 1128788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:16.269098 1128788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:16.269804 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270398 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.270420 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.270711 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.270989 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.271183 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.271362 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.271984 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.273854 1128788 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:14.305258 1128964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.612890386s)
	I0318 14:26:14.305324 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:14.325572 1128964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:26:14.337875 1128964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:26:14.350490 1128964 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:26:14.350530 1128964 kubeadm.go:156] found existing configuration files:
	
	I0318 14:26:14.350592 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 14:26:14.361521 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:26:14.361612 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:26:14.372767 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 14:26:14.383545 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:26:14.383614 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:26:14.394057 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.404187 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:26:14.404261 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:26:14.415029 1128964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 14:26:14.425738 1128964 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:26:14.425820 1128964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:26:14.436847 1128964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:26:14.674909 1128964 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:26:16.275278 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:16.275298 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:16.275323 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.278500 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.278909 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.278939 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.279230 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.279437 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.279612 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.279748 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.286716 1128788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0318 14:26:16.287176 1128788 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:16.287651 1128788 main.go:141] libmachine: Using API Version  1
	I0318 14:26:16.287678 1128788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:16.288057 1128788 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:16.288248 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetState
	I0318 14:26:16.290084 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .DriverName
	I0318 14:26:16.290359 1128788 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.290381 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:16.290404 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHHostname
	I0318 14:26:16.293253 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293662 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:ad:e4", ip: ""} in network mk-embed-certs-767719: {Iface:virbr4 ExpiryTime:2024-03-18 15:12:13 +0000 UTC Type:0 Mac:52:54:00:86:ad:e4 Iaid: IPaddr:192.168.72.45 Prefix:24 Hostname:embed-certs-767719 Clientid:01:52:54:00:86:ad:e4}
	I0318 14:26:16.293688 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | domain embed-certs-767719 has defined IP address 192.168.72.45 and MAC address 52:54:00:86:ad:e4 in network mk-embed-certs-767719
	I0318 14:26:16.293886 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHPort
	I0318 14:26:16.294078 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHKeyPath
	I0318 14:26:16.294241 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .GetSSHUsername
	I0318 14:26:16.294398 1128788 sshutil.go:53] new ssh client: &{IP:192.168.72.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/embed-certs-767719/id_rsa Username:docker}
	I0318 14:26:16.460832 1128788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:16.537089 1128788 node_ready.go:35] waiting up to 6m0s for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550362 1128788 node_ready.go:49] node "embed-certs-767719" has status "Ready":"True"
	I0318 14:26:16.550391 1128788 node_ready.go:38] duration metric: took 13.195546ms for node "embed-certs-767719" to be "Ready" ...
	I0318 14:26:16.550405 1128788 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:16.557745 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:16.638531 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:16.638565 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:16.664638 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:16.762661 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:16.762713 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:16.792712 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:16.859169 1128788 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:16.859200 1128788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:16.954827 1128788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:18.103559 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.103592 1128788 pod_ready.go:81] duration metric: took 1.545818643s for pod "coredns-5dd5756b68-4knv5" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.103606 1128788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.256039 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.591350359s)
	I0318 14:26:18.256112 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256129 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256483 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256513 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256530 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.256528 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.256541 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.256918 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.256936 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.256950 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.264761 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.264788 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.265133 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.265164 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.265193 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.652953 1128788 pod_ready.go:92] pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.653088 1128788 pod_ready.go:81] duration metric: took 549.466665ms for pod "coredns-5dd5756b68-fm52r" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.653124 1128788 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674506 1128788 pod_ready.go:92] pod "etcd-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.674553 1128788 pod_ready.go:81] duration metric: took 21.386005ms for pod "etcd-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.674568 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.680422 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.887663901s)
	I0318 14:26:18.680486 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680498 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.680875 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.680887 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.680903 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.680921 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.680928 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.681198 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.681199 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.681277 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.711919 1128788 pod_ready.go:92] pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.711954 1128788 pod_ready.go:81] duration metric: took 37.376915ms for pod "kube-apiserver-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.711968 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730096 1128788 pod_ready.go:92] pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.730129 1128788 pod_ready.go:81] duration metric: took 18.151839ms for pod "kube-controller-manager-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.730145 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.756000 1128788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.801120989s)
	I0318 14:26:18.756076 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756091 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756416 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756435 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756445 1128788 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:18.756452 1128788 main.go:141] libmachine: (embed-certs-767719) Calling .Close
	I0318 14:26:18.756849 1128788 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:18.756883 1128788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:18.756895 1128788 addons.go:470] Verifying addon metrics-server=true in "embed-certs-767719"
	I0318 14:26:18.756917 1128788 main.go:141] libmachine: (embed-certs-767719) DBG | Closing plugin on server side
	I0318 14:26:18.759019 1128788 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 14:26:18.760442 1128788 addons.go:505] duration metric: took 2.551236037s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
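With the metrics-server manifests applied and the addon reported as enabled above, a quick spot-check from the host looks roughly like the following. This is a minimal sketch, assuming the kubectl context this profile writes ("embed-certs-767719") and the standard v1beta1.metrics.k8s.io APIService that the metrics-server addon registers; it is not part of the test itself.

# Confirm the APIService registration and the Deployment the addon created.
kubectl --context embed-certs-767719 get apiservice v1beta1.metrics.k8s.io
kubectl --context embed-certs-767719 -n kube-system get deploy metrics-server
# "kubectl top" only returns data once the metrics-server pod is Ready and scraping kubelets,
# which is why the pod still shows Pending/ContainersNotReady further down in this log.
kubectl --context embed-certs-767719 top nodes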
	I0318 14:26:18.942164 1128788 pod_ready.go:92] pod "kube-proxy-f4547" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:18.942196 1128788 pod_ready.go:81] duration metric: took 212.040337ms for pod "kube-proxy-f4547" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:18.942205 1128788 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341772 1128788 pod_ready.go:92] pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:19.341808 1128788 pod_ready.go:81] duration metric: took 399.594033ms for pod "kube-scheduler-embed-certs-767719" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:19.341820 1128788 pod_ready.go:38] duration metric: took 2.791403027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:19.341841 1128788 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:19.341921 1128788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:19.362110 1128788 api_server.go:72] duration metric: took 3.152894755s to wait for apiserver process to appear ...
	I0318 14:26:19.362150 1128788 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:19.362209 1128788 api_server.go:253] Checking apiserver healthz at https://192.168.72.45:8443/healthz ...
	I0318 14:26:19.368138 1128788 api_server.go:279] https://192.168.72.45:8443/healthz returned 200:
	ok
	I0318 14:26:19.369583 1128788 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:19.369608 1128788 api_server.go:131] duration metric: took 7.450993ms to wait for apiserver health ...
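The healthz probe recorded above can be reproduced by hand against the same endpoint. A minimal sketch, assuming minikube's usual certificate layout under the profile directory (the client.crt/client.key paths below follow that convention and are not shown in this log):

# Probe the same endpoint the log checked.
curl --cacert /home/jenkins/minikube-integration/18427-1067917/.minikube/ca.crt \
     --cert   /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/client.crt \
     --key    /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/embed-certs-767719/client.key \
     https://192.168.72.45:8443/healthz
# A healthy apiserver answers HTTP 200 with the body "ok", matching the "returned 200: ok" lines above.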
	I0318 14:26:19.369617 1128788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:19.545388 1128788 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:19.545423 1128788 system_pods.go:61] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.545428 1128788 system_pods.go:61] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.545431 1128788 system_pods.go:61] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.545434 1128788 system_pods.go:61] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.545438 1128788 system_pods.go:61] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.545441 1128788 system_pods.go:61] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.545443 1128788 system_pods.go:61] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.545449 1128788 system_pods.go:61] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.545455 1128788 system_pods.go:61] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.545464 1128788 system_pods.go:74] duration metric: took 175.840386ms to wait for pod list to return data ...
	I0318 14:26:19.545473 1128788 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:19.741364 1128788 default_sa.go:45] found service account: "default"
	I0318 14:26:19.741405 1128788 default_sa.go:55] duration metric: took 195.920075ms for default service account to be created ...
	I0318 14:26:19.741424 1128788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:19.945000 1128788 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:19.945039 1128788 system_pods.go:89] "coredns-5dd5756b68-4knv5" [b2afcd2a-a9a3-494b-8f2b-c532cd60a569] Running
	I0318 14:26:19.945047 1128788 system_pods.go:89] "coredns-5dd5756b68-fm52r" [48d62bd5-d44a-4de7-a73c-7cd615b34470] Running
	I0318 14:26:19.945053 1128788 system_pods.go:89] "etcd-embed-certs-767719" [4805a956-3e9f-47f3-9d0e-da781b62ac67] Running
	I0318 14:26:19.945060 1128788 system_pods.go:89] "kube-apiserver-embed-certs-767719" [11c9c470-e6a1-4a18-8337-5d04bf0e711d] Running
	I0318 14:26:19.945066 1128788 system_pods.go:89] "kube-controller-manager-embed-certs-767719" [c089e625-537e-4657-8428-d2e81c78f926] Running
	I0318 14:26:19.945070 1128788 system_pods.go:89] "kube-proxy-f4547" [90d43cdd-0e1d-4158-9403-91bb7b556f70] Running
	I0318 14:26:19.945076 1128788 system_pods.go:89] "kube-scheduler-embed-certs-767719" [3dc26f31-eb17-43f1-ab3c-0788c9f145f0] Running
	I0318 14:26:19.945087 1128788 system_pods.go:89] "metrics-server-57f55c9bc5-w8z6p" [e4621ef8-7807-48ba-a57c-d5804dbfb784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:19.945097 1128788 system_pods.go:89] "storage-provisioner" [3aaa79fa-95b2-40d3-af0c-db60292f77e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 14:26:19.945110 1128788 system_pods.go:126] duration metric: took 203.67742ms to wait for k8s-apps to be running ...
	I0318 14:26:19.945122 1128788 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:19.945188 1128788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:19.987286 1128788 system_svc.go:56] duration metric: took 42.149434ms WaitForService to wait for kubelet
	I0318 14:26:19.987328 1128788 kubeadm.go:576] duration metric: took 3.778120092s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:19.987361 1128788 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:20.141763 1128788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:20.141803 1128788 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:20.141822 1128788 node_conditions.go:105] duration metric: took 154.45408ms to run NodePressure ...
	I0318 14:26:20.141840 1128788 start.go:240] waiting for startup goroutines ...
	I0318 14:26:20.141851 1128788 start.go:245] waiting for cluster config update ...
	I0318 14:26:20.141867 1128788 start.go:254] writing updated cluster config ...
	I0318 14:26:20.142268 1128788 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:20.206832 1128788 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:20.209057 1128788 out.go:177] * Done! kubectl is now configured to use "embed-certs-767719" cluster and "default" namespace by default
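Once "Done!" is printed, the profile's context is current in the updated kubeconfig, so the cluster can be inspected directly. A small usage sketch (the context name comes from the log line above; the commands are standard kubectl):

kubectl config current-context           # expected: embed-certs-767719
kubectl get nodes -o wide                # the single KVM node, Ready
kubectl -n kube-system get pods          # control-plane pods plus the addons enabled above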
	I0318 14:26:18.302228 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:20.799704 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.444912 1128964 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:26:23.444993 1128964 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:26:23.445098 1128964 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:26:23.445212 1128964 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:26:23.445359 1128964 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:26:23.445461 1128964 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:26:23.446790 1128964 out.go:204]   - Generating certificates and keys ...
	I0318 14:26:23.446904 1128964 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:26:23.446986 1128964 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:26:23.447102 1128964 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:26:23.447194 1128964 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:26:23.447309 1128964 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:26:23.447376 1128964 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:26:23.447453 1128964 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:26:23.447529 1128964 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:26:23.447607 1128964 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:26:23.447693 1128964 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:26:23.447741 1128964 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:26:23.447856 1128964 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:26:23.447937 1128964 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:26:23.448019 1128964 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:26:23.448121 1128964 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:26:23.448194 1128964 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:26:23.448311 1128964 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:26:23.448422 1128964 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:26:23.450038 1128964 out.go:204]   - Booting up control plane ...
	I0318 14:26:23.450174 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:26:23.450282 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:26:23.450371 1128964 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:26:23.450509 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:26:23.450633 1128964 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:26:23.450671 1128964 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:26:23.450818 1128964 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:26:23.450887 1128964 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.005932 seconds
	I0318 14:26:23.450974 1128964 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:26:23.451093 1128964 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:26:23.451143 1128964 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:26:23.451340 1128964 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-075922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:26:23.451414 1128964 kubeadm.go:309] [bootstrap-token] Using token: k51w96.h8xduusjdfbez3gf
	I0318 14:26:23.452848 1128964 out.go:204]   - Configuring RBAC rules ...
	I0318 14:26:23.452964 1128964 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:26:23.453073 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:26:23.453269 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:26:23.453499 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:26:23.453664 1128964 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:26:23.453785 1128964 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:26:23.453940 1128964 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:26:23.454005 1128964 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:26:23.454074 1128964 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:26:23.454084 1128964 kubeadm.go:309] 
	I0318 14:26:23.454172 1128964 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:26:23.454186 1128964 kubeadm.go:309] 
	I0318 14:26:23.454288 1128964 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:26:23.454298 1128964 kubeadm.go:309] 
	I0318 14:26:23.454335 1128964 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:26:23.454412 1128964 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:26:23.454475 1128964 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:26:23.454484 1128964 kubeadm.go:309] 
	I0318 14:26:23.454528 1128964 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:26:23.454538 1128964 kubeadm.go:309] 
	I0318 14:26:23.454592 1128964 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:26:23.454599 1128964 kubeadm.go:309] 
	I0318 14:26:23.454681 1128964 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:26:23.454804 1128964 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:26:23.454907 1128964 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:26:23.454919 1128964 kubeadm.go:309] 
	I0318 14:26:23.455027 1128964 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:26:23.455146 1128964 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:26:23.455157 1128964 kubeadm.go:309] 
	I0318 14:26:23.455264 1128964 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455401 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:26:23.455433 1128964 kubeadm.go:309] 	--control-plane 
	I0318 14:26:23.455441 1128964 kubeadm.go:309] 
	I0318 14:26:23.455551 1128964 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:26:23.455560 1128964 kubeadm.go:309] 
	I0318 14:26:23.455666 1128964 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token k51w96.h8xduusjdfbez3gf \
	I0318 14:26:23.455814 1128964 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:26:23.455838 1128964 cni.go:84] Creating CNI manager for ""
	I0318 14:26:23.455849 1128964 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:26:23.457678 1128964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:26:22.801209 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:25.305096 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:23.459285 1128964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:26:23.475803 1128964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
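The 457-byte file written above configures the bridge CNI that the log selected for the kvm2 + crio combination. The exact bytes are not reproduced in the log, so the conflist below is only an illustrative sketch of the usual bridge + portmap shape; the subnet and plugin options are assumptions, not the file minikube wrote.

# Illustrative bridge CNI conflist of the kind placed at /etc/cni/net.d/1-k8s.conflist.
cat <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF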
	I0318 14:26:23.515652 1128964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:23.515772 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-075922 minikube.k8s.io/updated_at=2024_03_18T14_26_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=default-k8s-diff-port-075922 minikube.k8s.io/primary=true
	I0318 14:26:23.796828 1128964 ops.go:34] apiserver oom_adj: -16
	I0318 14:26:23.796947 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.296970 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:24.797728 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.297564 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:25.797144 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:26.297056 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.800960 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:29.802967 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:26.798004 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.297935 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:27.797550 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.297031 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:28.797624 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.297549 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:29.797256 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.297964 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:30.797927 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:31.297742 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.300787 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:34.800941 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:31.797040 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.297155 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:32.797371 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.297809 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:33.797723 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.297045 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:34.797008 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.297030 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.797767 1128964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:26:35.895914 1128964 kubeadm.go:1107] duration metric: took 12.380212538s to wait for elevateKubeSystemPrivileges
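The long run of "kubectl get sa default" calls above is minikube polling until kubeadm has created the default ServiceAccount before binding cluster-admin to kube-system:default (the elevateKubeSystemPrivileges step whose 12.38s duration is reported here). An equivalent wait, sketched with the same binary and kubeconfig paths that appear in the log; the loop itself is illustrative, not minikube's code:

# Poll until the "default" ServiceAccount exists, then create the minikube-rbac binding.
until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
  --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac \
  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
  --kubeconfig=/var/lib/minikube/kubeconfig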
	W0318 14:26:35.895975 1128964 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:26:35.895987 1128964 kubeadm.go:393] duration metric: took 5m15.606276512s to StartCluster
	I0318 14:26:35.896013 1128964 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.896123 1128964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:26:35.898023 1128964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:26:35.898324 1128964 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.39 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:26:35.900235 1128964 out.go:177] * Verifying Kubernetes components...
	I0318 14:26:35.898415 1128964 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:26:35.898550 1128964 config.go:182] Loaded profile config "default-k8s-diff-port-075922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:26:35.901588 1128964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:26:35.901599 1128964 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901617 1128964 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901640 1128964 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901650 1128964 addons.go:243] addon metrics-server should already be in state true
	I0318 14:26:35.901665 1128964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-075922"
	I0318 14:26:35.901588 1128964 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-075922"
	I0318 14:26:35.901698 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.901723 1128964 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.901735 1128964 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:26:35.901764 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.902055 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902088 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902097 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902126 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.902130 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.902169 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.919538 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0318 14:26:35.920140 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.920836 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.920864 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.921282 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.921940 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.921983 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.923313 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
	I0318 14:26:35.923321 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0318 14:26:35.923742 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.923792 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.924263 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924280 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924381 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.924395 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.924710 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924733 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.924893 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.925215 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.925235 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.928021 1128964 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-075922"
	W0318 14:26:35.928047 1128964 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:26:35.928081 1128964 host.go:66] Checking if "default-k8s-diff-port-075922" exists ...
	I0318 14:26:35.928422 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.928449 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.941908 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0318 14:26:35.942465 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.943114 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.943146 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.943757 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.943991 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.944493 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0318 14:26:35.944874 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.945387 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.945404 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.945865 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.945988 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.948302 1128964 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:26:35.946821 1128964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:26:35.947744 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0318 14:26:35.950087 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:26:35.950110 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:26:35.950135 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.950181 1128964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:26:35.950664 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.951258 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.951295 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.951755 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.952146 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.953842 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954331 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.954353 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.954360 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.954563 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.956253 1128964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:26:35.954739 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:32.294235 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:26:32.295514 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:32.295750 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:26:35.956487 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.957743 1128964 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:35.957764 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:26:35.957783 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.957864 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.960451 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.960896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.960929 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.961107 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.961281 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.961435 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.961565 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:35.968795 1128964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0318 14:26:35.969191 1128964 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:26:35.969631 1128964 main.go:141] libmachine: Using API Version  1
	I0318 14:26:35.969646 1128964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:26:35.969955 1128964 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:26:35.970117 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetState
	I0318 14:26:35.971799 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .DriverName
	I0318 14:26:35.972169 1128964 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:35.972188 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:26:35.972206 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHHostname
	I0318 14:26:35.974906 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975268 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:53:d5", ip: ""} in network mk-default-k8s-diff-port-075922: {Iface:virbr5 ExpiryTime:2024-03-18 15:21:05 +0000 UTC Type:0 Mac:52:54:00:c5:53:d5 Iaid: IPaddr:192.168.83.39 Prefix:24 Hostname:default-k8s-diff-port-075922 Clientid:01:52:54:00:c5:53:d5}
	I0318 14:26:35.975301 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | domain default-k8s-diff-port-075922 has defined IP address 192.168.83.39 and MAC address 52:54:00:c5:53:d5 in network mk-default-k8s-diff-port-075922
	I0318 14:26:35.975551 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHPort
	I0318 14:26:35.975767 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHKeyPath
	I0318 14:26:35.975958 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .GetSSHUsername
	I0318 14:26:35.976137 1128964 sshutil.go:53] new ssh client: &{IP:192.168.83.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/default-k8s-diff-port-075922/id_rsa Username:docker}
	I0318 14:26:36.122420 1128964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:26:36.139655 1128964 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160857 1128964 node_ready.go:49] node "default-k8s-diff-port-075922" has status "Ready":"True"
	I0318 14:26:36.160883 1128964 node_ready.go:38] duration metric: took 21.193343ms for node "default-k8s-diff-port-075922" to be "Ready" ...
	I0318 14:26:36.160893 1128964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:36.176832 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:36.240357 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:26:36.240385 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:26:36.261620 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:26:36.279644 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:26:36.294510 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:26:36.294546 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:26:36.374231 1128964 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:36.376166 1128964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:26:36.419045 1128964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:26:38.032072 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.752379015s)
	I0318 14:26:38.032148 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032161 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032374 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.770714521s)
	I0318 14:26:38.032416 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032427 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032623 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032652 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032660 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032683 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032698 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.032796 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.032814 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.032817 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.032835 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.032848 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.033046 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033107 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033173 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.033149 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.033259 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.033284 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.112866 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.112896 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.113337 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.113362 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.113384 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176199 1128964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.757085355s)
	I0318 14:26:38.176281 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176302 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176669 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) DBG | Closing plugin on server side
	I0318 14:26:38.176683 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176697 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176707 1128964 main.go:141] libmachine: Making call to close driver server
	I0318 14:26:38.176716 1128964 main.go:141] libmachine: (default-k8s-diff-port-075922) Calling .Close
	I0318 14:26:38.176955 1128964 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:26:38.176969 1128964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:26:38.176980 1128964 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-075922"
	I0318 14:26:38.178714 1128964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:26:37.300219 1128583 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:39.293136 1128583 pod_ready.go:81] duration metric: took 4m0.000606722s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" ...
	E0318 14:26:39.293173 1128583 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6pn6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 14:26:39.293203 1128583 pod_ready.go:38] duration metric: took 4m14.549283732s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:39.293239 1128583 kubeadm.go:591] duration metric: took 4m22.862167815s to restartPrimaryControlPlane
	W0318 14:26:39.293320 1128583 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 14:26:39.293362 1128583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:26:37.296327 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:37.296642 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
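The two [kubelet-check] lines above come from a different profile being brought up in parallel (process 1129259) and mean kubeadm could not reach the kubelet's health endpoint on localhost:10248. A sketch of the usual first diagnostics on that node, using only standard commands; nothing here is taken from this log beyond the probed URL:

# Typical first checks when kubeadm reports the kubelet "isn't running or healthy".
sudo systemctl status kubelet --no-pager            # is the unit active and enabled?
sudo journalctl -u kubelet --no-pager | tail -n 50  # recent kubelet errors
curl -sS http://localhost:10248/healthz; echo       # the same probe kubeadm performs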
	I0318 14:26:38.180451 1128964 addons.go:505] duration metric: took 2.282033093s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:26:38.194239 1128964 pod_ready.go:102] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"False"
	I0318 14:26:40.186091 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.186125 1128964 pod_ready.go:81] duration metric: took 4.009253844s for pod "coredns-5dd5756b68-c8q9g" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.186139 1128964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193026 1128964 pod_ready.go:92] pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.193059 1128964 pod_ready.go:81] duration metric: took 6.912513ms for pod "coredns-5dd5756b68-zqnfs" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.193069 1128964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199244 1128964 pod_ready.go:92] pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.199272 1128964 pod_ready.go:81] duration metric: took 6.195834ms for pod "etcd-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.199283 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.204991 1128964 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.205019 1128964 pod_ready.go:81] duration metric: took 5.728459ms for pod "kube-apiserver-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.205034 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214706 1128964 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.214730 1128964 pod_ready.go:81] duration metric: took 9.687528ms for pod "kube-controller-manager-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.214739 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.581970 1128964 pod_ready.go:92] pod "kube-proxy-bzwvf" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.582045 1128964 pod_ready.go:81] duration metric: took 367.297496ms for pod "kube-proxy-bzwvf" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.582059 1128964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981562 1128964 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace has status "Ready":"True"
	I0318 14:26:40.981592 1128964 pod_ready.go:81] duration metric: took 399.525488ms for pod "kube-scheduler-default-k8s-diff-port-075922" in "kube-system" namespace to be "Ready" ...
	I0318 14:26:40.981601 1128964 pod_ready.go:38] duration metric: took 4.820697544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:26:40.981618 1128964 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:26:40.981676 1128964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:26:40.998626 1128964 api_server.go:72] duration metric: took 5.100242538s to wait for apiserver process to appear ...
	I0318 14:26:40.998672 1128964 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:26:40.998703 1128964 api_server.go:253] Checking apiserver healthz at https://192.168.83.39:8444/healthz ...
	I0318 14:26:41.010986 1128964 api_server.go:279] https://192.168.83.39:8444/healthz returned 200:
	ok
	I0318 14:26:41.012714 1128964 api_server.go:141] control plane version: v1.28.4
	I0318 14:26:41.012742 1128964 api_server.go:131] duration metric: took 14.061953ms to wait for apiserver health ...
	I0318 14:26:41.012750 1128964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:26:41.186873 1128964 system_pods.go:59] 9 kube-system pods found
	I0318 14:26:41.186910 1128964 system_pods.go:61] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.186917 1128964 system_pods.go:61] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.186922 1128964 system_pods.go:61] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.186935 1128964 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.186943 1128964 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.186948 1128964 system_pods.go:61] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.186953 1128964 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.187013 1128964 system_pods.go:61] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.187029 1128964 system_pods.go:61] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.187041 1128964 system_pods.go:74] duration metric: took 174.283401ms to wait for pod list to return data ...
	I0318 14:26:41.187054 1128964 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:26:41.381195 1128964 default_sa.go:45] found service account: "default"
	I0318 14:26:41.381238 1128964 default_sa.go:55] duration metric: took 194.17219ms for default service account to be created ...
	I0318 14:26:41.381252 1128964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:26:41.584896 1128964 system_pods.go:86] 9 kube-system pods found
	I0318 14:26:41.584934 1128964 system_pods.go:89] "coredns-5dd5756b68-c8q9g" [207d4899-9bf3-4f4b-ab21-bc35079a0bda] Running
	I0318 14:26:41.584940 1128964 system_pods.go:89] "coredns-5dd5756b68-zqnfs" [2603cb56-7d34-4a9e-8614-9d4f4610da6d] Running
	I0318 14:26:41.584945 1128964 system_pods.go:89] "etcd-default-k8s-diff-port-075922" [aa58502a-a6e9-46d6-b513-b6cbbc2184d7] Running
	I0318 14:26:41.584952 1128964 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-075922" [85d99637-efe3-4110-bf19-63f18f94f233] Running
	I0318 14:26:41.584957 1128964 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-075922" [765676e2-279e-488c-88eb-c613b63a4bdd] Running
	I0318 14:26:41.584961 1128964 system_pods.go:89] "kube-proxy-bzwvf" [f52bafde-a25e-4496-a987-42d88c036982] Running
	I0318 14:26:41.584965 1128964 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-075922" [fa56e248-ec70-4b0c-b1d6-6a5578aff510] Running
	I0318 14:26:41.584974 1128964 system_pods.go:89] "metrics-server-57f55c9bc5-7c444" [a04f0648-aa96-4119-b6e8-b981ac4e054f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:26:41.584980 1128964 system_pods.go:89] "storage-provisioner" [a8954270-a7e4-4584-860f-eea1ffd428c5] Running
	I0318 14:26:41.584996 1128964 system_pods.go:126] duration metric: took 203.730421ms to wait for k8s-apps to be running ...
	I0318 14:26:41.585011 1128964 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:26:41.585065 1128964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:26:41.602211 1128964 system_svc.go:56] duration metric: took 17.185915ms WaitForService to wait for kubelet
	I0318 14:26:41.602253 1128964 kubeadm.go:576] duration metric: took 5.703881545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:26:41.602283 1128964 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:26:41.781292 1128964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:26:41.781321 1128964 node_conditions.go:123] node cpu capacity is 2
	I0318 14:26:41.781333 1128964 node_conditions.go:105] duration metric: took 179.044515ms to run NodePressure ...
	I0318 14:26:41.781345 1128964 start.go:240] waiting for startup goroutines ...
	I0318 14:26:41.781352 1128964 start.go:245] waiting for cluster config update ...
	I0318 14:26:41.781363 1128964 start.go:254] writing updated cluster config ...
	I0318 14:26:41.781670 1128964 ssh_runner.go:195] Run: rm -f paused
	I0318 14:26:41.845950 1128964 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 14:26:41.848522 1128964 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-075922" cluster and "default" namespace by default
	I0318 14:26:47.296738 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:26:47.296974 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:07.297620 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:07.297848 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:11.668940 1128583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.375539998s)
	I0318 14:27:11.669036 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:11.687767 1128583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:27:11.699135 1128583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:11.710896 1128583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:11.710924 1128583 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:11.710971 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:11.721562 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:11.721638 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:11.733335 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:11.744643 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:11.744724 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:11.755801 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.766424 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:11.766515 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:11.777734 1128583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:11.788887 1128583 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:11.788972 1128583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:27:11.800792 1128583 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:11.858933 1128583 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 14:27:11.859030 1128583 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:27:12.029485 1128583 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:27:12.029703 1128583 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:27:12.029833 1128583 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:27:12.279174 1128583 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:27:12.281285 1128583 out.go:204]   - Generating certificates and keys ...
	I0318 14:27:12.281400 1128583 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:27:12.281507 1128583 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:27:12.281633 1128583 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:27:12.281726 1128583 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:27:12.281844 1128583 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:27:12.281938 1128583 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:27:12.282031 1128583 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:27:12.282121 1128583 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:27:12.282218 1128583 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:27:12.282325 1128583 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:27:12.282392 1128583 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:27:12.282470 1128583 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:27:12.605106 1128583 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:27:12.950706 1128583 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 14:27:13.067948 1128583 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:27:13.340677 1128583 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:27:13.393147 1128583 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:27:13.393891 1128583 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:27:13.396474 1128583 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:27:13.398563 1128583 out.go:204]   - Booting up control plane ...
	I0318 14:27:13.398698 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:27:13.398814 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:27:13.398900 1128583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:27:13.422155 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:27:13.423529 1128583 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:27:13.423626 1128583 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:27:13.568295 1128583 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:27:19.571958 1128583 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003509 seconds
	I0318 14:27:19.587644 1128583 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:27:19.607417 1128583 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:27:20.153253 1128583 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:27:20.153526 1128583 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-188109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:27:20.671613 1128583 kubeadm.go:309] [bootstrap-token] Using token: oq5d1l.24j9td8ex727h998
	I0318 14:27:20.673250 1128583 out.go:204]   - Configuring RBAC rules ...
	I0318 14:27:20.673402 1128583 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:27:20.680765 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:27:20.693884 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:27:20.698696 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:27:20.702572 1128583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:27:20.710027 1128583 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:27:20.725068 1128583 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:27:20.981178 1128583 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:27:21.104335 1128583 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:27:21.107428 1128583 kubeadm.go:309] 
	I0318 14:27:21.107550 1128583 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:27:21.107596 1128583 kubeadm.go:309] 
	I0318 14:27:21.107725 1128583 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:27:21.107750 1128583 kubeadm.go:309] 
	I0318 14:27:21.107796 1128583 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:27:21.107894 1128583 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:27:21.107995 1128583 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:27:21.108030 1128583 kubeadm.go:309] 
	I0318 14:27:21.108127 1128583 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:27:21.108145 1128583 kubeadm.go:309] 
	I0318 14:27:21.108228 1128583 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:27:21.108242 1128583 kubeadm.go:309] 
	I0318 14:27:21.108318 1128583 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:27:21.108400 1128583 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:27:21.108487 1128583 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:27:21.108503 1128583 kubeadm.go:309] 
	I0318 14:27:21.108628 1128583 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:27:21.108730 1128583 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:27:21.108741 1128583 kubeadm.go:309] 
	I0318 14:27:21.108839 1128583 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.108968 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 \
	I0318 14:27:21.109031 1128583 kubeadm.go:309] 	--control-plane 
	I0318 14:27:21.109054 1128583 kubeadm.go:309] 
	I0318 14:27:21.109176 1128583 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:27:21.109195 1128583 kubeadm.go:309] 
	I0318 14:27:21.109298 1128583 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oq5d1l.24j9td8ex727h998 \
	I0318 14:27:21.109455 1128583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8800b8ca6c9ae9bf44f58b1aa41e3236025b4a6bd7cc38a87770348630144112 
	I0318 14:27:21.114992 1128583 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:21.115128 1128583 cni.go:84] Creating CNI manager for ""
	I0318 14:27:21.115151 1128583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:27:21.116940 1128583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:27:21.118320 1128583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:27:21.167945 1128583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 14:27:21.256429 1128583 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:21.256510 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-188109 minikube.k8s.io/updated_at=2024_03_18T14_27_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=no-preload-188109 minikube.k8s.io/primary=true
	I0318 14:27:21.315419 1128583 ops.go:34] apiserver oom_adj: -16
	I0318 14:27:21.530472 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.030814 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:22.531214 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.030869 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:23.530677 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.031137 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:24.531400 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.031455 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:25.530648 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.031501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:26.531399 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.031109 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:27.531261 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.030757 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:28.531295 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.030505 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:29.531501 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.030996 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:30.530490 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.030520 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:31.531340 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.031217 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:32.531425 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.031231 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.531300 1128583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:27:33.678904 1128583 kubeadm.go:1107] duration metric: took 12.422463336s to wait for elevateKubeSystemPrivileges
	W0318 14:27:33.678959 1128583 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:27:33.678972 1128583 kubeadm.go:393] duration metric: took 5m17.305262011s to StartCluster
	I0318 14:27:33.678999 1128583 settings.go:142] acquiring lock: {Name:mk23b63ad1e756fd8419b9ef4f888d1027cfea81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.679119 1128583 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:27:33.681595 1128583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18427-1067917/kubeconfig: {Name:mk71ffd4ec592a2be1ff677ee77c3de990498ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:27:33.681893 1128583 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.40 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:27:33.683724 1128583 out.go:177] * Verifying Kubernetes components...
	I0318 14:27:33.682059 1128583 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:27:33.682122 1128583 config.go:182] Loaded profile config "no-preload-188109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:27:33.685123 1128583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:27:33.685131 1128583 addons.go:69] Setting default-storageclass=true in profile "no-preload-188109"
	I0318 14:27:33.685135 1128583 addons.go:69] Setting storage-provisioner=true in profile "no-preload-188109"
	I0318 14:27:33.685139 1128583 addons.go:69] Setting metrics-server=true in profile "no-preload-188109"
	I0318 14:27:33.685165 1128583 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-188109"
	I0318 14:27:33.685173 1128583 addons.go:234] Setting addon metrics-server=true in "no-preload-188109"
	I0318 14:27:33.685175 1128583 addons.go:234] Setting addon storage-provisioner=true in "no-preload-188109"
	W0318 14:27:33.685182 1128583 addons.go:243] addon metrics-server should already be in state true
	W0318 14:27:33.685185 1128583 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:27:33.685231 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685238 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.685573 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685575 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685613 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685617 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.685629 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.685637 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.703022 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0318 14:27:33.703262 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0318 14:27:33.703844 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704181 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.704628 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704649 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.704715 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.704736 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.705213 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705374 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.705809 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705863 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.705911 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.705987 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.706076 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0318 14:27:33.706558 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.707198 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.707222 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.707699 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.708354 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.712289 1128583 addons.go:234] Setting addon default-storageclass=true in "no-preload-188109"
	W0318 14:27:33.712323 1128583 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:27:33.712364 1128583 host.go:66] Checking if "no-preload-188109" exists ...
	I0318 14:27:33.712795 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.712833 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.724381 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0318 14:27:33.724980 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.725587 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.725614 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.726054 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.726363 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.727777 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0318 14:27:33.728182 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.728497 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.730538 1128583 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:27:33.729152 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.730851 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0318 14:27:33.732037 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:27:33.732055 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:27:33.732076 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.732113 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.732489 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.732593 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.732881 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.732979 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.732991 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.733604 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.734297 1128583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:27:33.734329 1128583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:27:33.735399 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.737266 1128583 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:27:33.735988 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.736830 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.739081 1128583 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:33.739098 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:27:33.737327 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.739122 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.739142 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.740009 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.740263 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.740482 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.742702 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743181 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.743211 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.743473 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.743706 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.743902 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.744097 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.752903 1128583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0318 14:27:33.756275 1128583 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:27:33.756901 1128583 main.go:141] libmachine: Using API Version  1
	I0318 14:27:33.756932 1128583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:27:33.757363 1128583 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:27:33.757603 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetState
	I0318 14:27:33.759471 1128583 main.go:141] libmachine: (no-preload-188109) Calling .DriverName
	I0318 14:27:33.759732 1128583 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:33.759751 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:27:33.759772 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHHostname
	I0318 14:27:33.762687 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763139 1128583 main.go:141] libmachine: (no-preload-188109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:62:25", ip: ""} in network mk-no-preload-188109: {Iface:virbr2 ExpiryTime:2024-03-18 15:11:33 +0000 UTC Type:0 Mac:52:54:00:21:62:25 Iaid: IPaddr:192.168.61.40 Prefix:24 Hostname:no-preload-188109 Clientid:01:52:54:00:21:62:25}
	I0318 14:27:33.763162 1128583 main.go:141] libmachine: (no-preload-188109) DBG | domain no-preload-188109 has defined IP address 192.168.61.40 and MAC address 52:54:00:21:62:25 in network mk-no-preload-188109
	I0318 14:27:33.763414 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHPort
	I0318 14:27:33.763599 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHKeyPath
	I0318 14:27:33.763765 1128583 main.go:141] libmachine: (no-preload-188109) Calling .GetSSHUsername
	I0318 14:27:33.763919 1128583 sshutil.go:53] new ssh client: &{IP:192.168.61.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/no-preload-188109/id_rsa Username:docker}
	I0318 14:27:33.942490 1128583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:27:33.975796 1128583 node_ready.go:35] waiting up to 6m0s for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008100 1128583 node_ready.go:49] node "no-preload-188109" has status "Ready":"True"
	I0318 14:27:34.008135 1128583 node_ready.go:38] duration metric: took 32.281068ms for node "no-preload-188109" to be "Ready" ...
	I0318 14:27:34.008149 1128583 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:34.039370 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:34.067765 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:27:34.067798 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:27:34.088294 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:27:34.091931 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:27:34.121689 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:27:34.121722 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:27:34.183609 1128583 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:34.183638 1128583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:27:34.264906 1128583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:27:35.590900 1128583 pod_ready.go:92] pod "coredns-76f75df574-jk9v5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.590928 1128583 pod_ready.go:81] duration metric: took 1.551526097s for pod "coredns-76f75df574-jk9v5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.590938 1128583 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605647 1128583 pod_ready.go:92] pod "coredns-76f75df574-xczpc" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.605675 1128583 pod_ready.go:81] duration metric: took 14.730232ms for pod "coredns-76f75df574-xczpc" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.605685 1128583 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.613213 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.521243904s)
	I0318 14:27:35.613276 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613289 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613282 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.524948587s)
	I0318 14:27:35.613324 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613337 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613790 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.613811 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.613813 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.613824 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.613831 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614119 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614166 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614183 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.614191 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.614192 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.614234 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614273 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.614502 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.614517 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.636576 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.636610 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.636920 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.636946 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.656945 1128583 pod_ready.go:92] pod "etcd-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.656972 1128583 pod_ready.go:81] duration metric: took 51.280554ms for pod "etcd-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.656983 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683260 1128583 pod_ready.go:92] pod "kube-apiserver-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.683291 1128583 pod_ready.go:81] duration metric: took 26.301625ms for pod "kube-apiserver-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.683301 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.691855 1128583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42688194s)
	I0318 14:27:35.691918 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.691934 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692300 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692325 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692336 1128583 main.go:141] libmachine: Making call to close driver server
	I0318 14:27:35.692344 1128583 main.go:141] libmachine: (no-preload-188109) Calling .Close
	I0318 14:27:35.692661 1128583 main.go:141] libmachine: (no-preload-188109) DBG | Closing plugin on server side
	I0318 14:27:35.692701 1128583 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:27:35.692709 1128583 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:27:35.692721 1128583 addons.go:470] Verifying addon metrics-server=true in "no-preload-188109"
	I0318 14:27:35.694758 1128583 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 14:27:35.696004 1128583 addons.go:505] duration metric: took 2.013954954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 14:27:35.709010 1128583 pod_ready.go:92] pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.709035 1128583 pod_ready.go:81] duration metric: took 25.726967ms for pod "kube-controller-manager-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.709045 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982032 1128583 pod_ready.go:92] pod "kube-proxy-qpxx5" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:35.982080 1128583 pod_ready.go:81] duration metric: took 273.026354ms for pod "kube-proxy-qpxx5" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:35.982094 1128583 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380184 1128583 pod_ready.go:92] pod "kube-scheduler-no-preload-188109" in "kube-system" namespace has status "Ready":"True"
	I0318 14:27:36.380228 1128583 pod_ready.go:81] duration metric: took 398.123566ms for pod "kube-scheduler-no-preload-188109" in "kube-system" namespace to be "Ready" ...
	I0318 14:27:36.380241 1128583 pod_ready.go:38] duration metric: took 2.372078145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:27:36.380264 1128583 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:27:36.380334 1128583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:27:36.401316 1128583 api_server.go:72] duration metric: took 2.719374991s to wait for apiserver process to appear ...
	I0318 14:27:36.401358 1128583 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:27:36.401389 1128583 api_server.go:253] Checking apiserver healthz at https://192.168.61.40:8443/healthz ...
	I0318 14:27:36.407212 1128583 api_server.go:279] https://192.168.61.40:8443/healthz returned 200:
	ok
	I0318 14:27:36.408930 1128583 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:27:36.408966 1128583 api_server.go:131] duration metric: took 7.597771ms to wait for apiserver health ...
	I0318 14:27:36.408989 1128583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:27:36.583053 1128583 system_pods.go:59] 9 kube-system pods found
	I0318 14:27:36.583099 1128583 system_pods.go:61] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.583107 1128583 system_pods.go:61] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.583112 1128583 system_pods.go:61] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.583116 1128583 system_pods.go:61] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.583120 1128583 system_pods.go:61] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.583123 1128583 system_pods.go:61] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.583127 1128583 system_pods.go:61] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.583134 1128583 system_pods.go:61] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.583138 1128583 system_pods.go:61] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.583147 1128583 system_pods.go:74] duration metric: took 174.139423ms to wait for pod list to return data ...
	I0318 14:27:36.583156 1128583 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:27:36.779733 1128583 default_sa.go:45] found service account: "default"
	I0318 14:27:36.779771 1128583 default_sa.go:55] duration metric: took 196.607194ms for default service account to be created ...
	I0318 14:27:36.779783 1128583 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 14:27:36.982750 1128583 system_pods.go:86] 9 kube-system pods found
	I0318 14:27:36.982783 1128583 system_pods.go:89] "coredns-76f75df574-jk9v5" [15ff991e-2c6b-49ad-bc69-c427d1f24610] Running
	I0318 14:27:36.982789 1128583 system_pods.go:89] "coredns-76f75df574-xczpc" [f09adcb8-dacb-4b1c-bbbf-9f056e89da3b] Running
	I0318 14:27:36.982793 1128583 system_pods.go:89] "etcd-no-preload-188109" [794c63fe-f690-4ed1-b405-7c493360bb5f] Running
	I0318 14:27:36.982798 1128583 system_pods.go:89] "kube-apiserver-no-preload-188109" [ba7067b5-5d3b-4305-856e-15a171b8ceaa] Running
	I0318 14:27:36.982804 1128583 system_pods.go:89] "kube-controller-manager-no-preload-188109" [43342e52-1443-4196-90ff-16de1810bd04] Running
	I0318 14:27:36.982808 1128583 system_pods.go:89] "kube-proxy-qpxx5" [a139949c-570d-438a-955a-03768aabf027] Running
	I0318 14:27:36.982812 1128583 system_pods.go:89] "kube-scheduler-no-preload-188109" [50b72b30-750f-4cd7-89e9-c4a402143bfe] Running
	I0318 14:27:36.982819 1128583 system_pods.go:89] "metrics-server-57f55c9bc5-9hjss" [87eb7974-1ffa-40d4-bb06-4963e92e1c7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:27:36.982823 1128583 system_pods.go:89] "storage-provisioner" [f0ad4b8a-2df8-4f2c-98aa-5c51f8b6052b] Running
	I0318 14:27:36.982832 1128583 system_pods.go:126] duration metric: took 203.042771ms to wait for k8s-apps to be running ...
	I0318 14:27:36.982839 1128583 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 14:27:36.982902 1128583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:37.000948 1128583 system_svc.go:56] duration metric: took 18.09435ms WaitForService to wait for kubelet
	I0318 14:27:37.000980 1128583 kubeadm.go:576] duration metric: took 3.319049387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:27:37.001005 1128583 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:27:37.180608 1128583 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:27:37.180639 1128583 node_conditions.go:123] node cpu capacity is 2
	I0318 14:27:37.180652 1128583 node_conditions.go:105] duration metric: took 179.641912ms to run NodePressure ...
	I0318 14:27:37.180665 1128583 start.go:240] waiting for startup goroutines ...
	I0318 14:27:37.180672 1128583 start.go:245] waiting for cluster config update ...
	I0318 14:27:37.180686 1128583 start.go:254] writing updated cluster config ...
	I0318 14:27:37.181004 1128583 ssh_runner.go:195] Run: rm -f paused
	I0318 14:27:37.236286 1128583 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 14:27:37.238455 1128583 out.go:177] * Done! kubectl is now configured to use "no-preload-188109" cluster and "default" namespace by default
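	(The line above reports kubectl 1.29.3 against a 1.29.0-rc.2 cluster, minor skew 0. An illustrative way to re-check that client/server pair outside the test harness, not part of the original run, would be:

		kubectl --context no-preload-188109 version
	)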
	I0318 14:27:47.299396 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:27:47.299722 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:27:47.299759 1129259 kubeadm.go:309] 
	I0318 14:27:47.299848 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:27:47.300040 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:27:47.300062 1129259 kubeadm.go:309] 
	I0318 14:27:47.300106 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:27:47.300187 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:27:47.300340 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:27:47.300356 1129259 kubeadm.go:309] 
	I0318 14:27:47.300534 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:27:47.300590 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:27:47.300636 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:27:47.300646 1129259 kubeadm.go:309] 
	I0318 14:27:47.300803 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:27:47.300929 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:27:47.300942 1129259 kubeadm.go:309] 
	I0318 14:27:47.301093 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:27:47.301232 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:27:47.301346 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:27:47.301475 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:27:47.301496 1129259 kubeadm.go:309] 
	I0318 14:27:47.303477 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:27:47.303616 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:27:47.303718 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 14:27:47.303903 1129259 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
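	(For reference, the kubeadm output above already names the node-side commands to run before the retry; a minimal inspection sequence, using the cri-o socket path exactly as shown in the log, would be:

		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	CONTAINERID here is the same placeholder used in the kubeadm message itself.)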
	
	I0318 14:27:47.303969 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 14:27:47.790664 1129259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 14:27:47.807959 1129259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:27:47.820332 1129259 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:27:47.820357 1129259 kubeadm.go:156] found existing configuration files:
	
	I0318 14:27:47.820422 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:27:47.832124 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:27:47.832219 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:27:47.845017 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:27:47.856877 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:27:47.856954 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:27:47.868530 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.879309 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:27:47.879394 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:27:47.891766 1129259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:27:47.903303 1129259 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:27:47.903392 1129259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:27:47.914820 1129259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:27:48.170124 1129259 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:29:44.224147 1129259 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 14:29:44.224414 1129259 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 14:29:44.225789 1129259 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 14:29:44.225885 1129259 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:29:44.226010 1129259 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:29:44.226135 1129259 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:29:44.226292 1129259 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:29:44.226384 1129259 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:29:44.228246 1129259 out.go:204]   - Generating certificates and keys ...
	I0318 14:29:44.228346 1129259 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:29:44.228440 1129259 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:29:44.228567 1129259 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 14:29:44.228684 1129259 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 14:29:44.228803 1129259 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 14:29:44.228874 1129259 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 14:29:44.229018 1129259 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 14:29:44.229096 1129259 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 14:29:44.229166 1129259 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 14:29:44.229231 1129259 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 14:29:44.229269 1129259 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 14:29:44.229316 1129259 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:29:44.229365 1129259 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:29:44.229415 1129259 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:29:44.229468 1129259 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:29:44.229540 1129259 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:29:44.229663 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:29:44.229755 1129259 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:29:44.229804 1129259 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:29:44.229893 1129259 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:29:44.231359 1129259 out.go:204]   - Booting up control plane ...
	I0318 14:29:44.231484 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:29:44.231592 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:29:44.231674 1129259 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:29:44.231777 1129259 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:29:44.231993 1129259 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:29:44.232046 1129259 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 14:29:44.232103 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232333 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232411 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232621 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232691 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.232896 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.232955 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233113 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233178 1129259 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 14:29:44.233370 1129259 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 14:29:44.233382 1129259 kubeadm.go:309] 
	I0318 14:29:44.233430 1129259 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 14:29:44.233480 1129259 kubeadm.go:309] 		timed out waiting for the condition
	I0318 14:29:44.233492 1129259 kubeadm.go:309] 
	I0318 14:29:44.233523 1129259 kubeadm.go:309] 	This error is likely caused by:
	I0318 14:29:44.233554 1129259 kubeadm.go:309] 		- The kubelet is not running
	I0318 14:29:44.233642 1129259 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 14:29:44.233655 1129259 kubeadm.go:309] 
	I0318 14:29:44.233797 1129259 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 14:29:44.233830 1129259 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 14:29:44.233860 1129259 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 14:29:44.233867 1129259 kubeadm.go:309] 
	I0318 14:29:44.233994 1129259 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 14:29:44.234116 1129259 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 14:29:44.234124 1129259 kubeadm.go:309] 
	I0318 14:29:44.234246 1129259 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 14:29:44.234389 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 14:29:44.234516 1129259 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 14:29:44.234606 1129259 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 14:29:44.234676 1129259 kubeadm.go:309] 
	I0318 14:29:44.234699 1129259 kubeadm.go:393] duration metric: took 7m59.028536241s to StartCluster
	I0318 14:29:44.234794 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 14:29:44.234989 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 14:29:44.301714 1129259 cri.go:89] found id: ""
	I0318 14:29:44.301764 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.301792 1129259 logs.go:278] No container was found matching "kube-apiserver"
	I0318 14:29:44.301801 1129259 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 14:29:44.301865 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 14:29:44.345158 1129259 cri.go:89] found id: ""
	I0318 14:29:44.345197 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.345209 1129259 logs.go:278] No container was found matching "etcd"
	I0318 14:29:44.345217 1129259 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 14:29:44.345281 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 14:29:44.381184 1129259 cri.go:89] found id: ""
	I0318 14:29:44.381217 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.381227 1129259 logs.go:278] No container was found matching "coredns"
	I0318 14:29:44.381232 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 14:29:44.381296 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 14:29:44.419906 1129259 cri.go:89] found id: ""
	I0318 14:29:44.419972 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.419987 1129259 logs.go:278] No container was found matching "kube-scheduler"
	I0318 14:29:44.419996 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 14:29:44.420085 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 14:29:44.459683 1129259 cri.go:89] found id: ""
	I0318 14:29:44.459732 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.459747 1129259 logs.go:278] No container was found matching "kube-proxy"
	I0318 14:29:44.459755 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 14:29:44.459848 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 14:29:44.502434 1129259 cri.go:89] found id: ""
	I0318 14:29:44.502477 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.502490 1129259 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 14:29:44.502499 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 14:29:44.502563 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 14:29:44.543384 1129259 cri.go:89] found id: ""
	I0318 14:29:44.543417 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.543429 1129259 logs.go:278] No container was found matching "kindnet"
	I0318 14:29:44.543438 1129259 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 14:29:44.543509 1129259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 14:29:44.584405 1129259 cri.go:89] found id: ""
	I0318 14:29:44.584450 1129259 logs.go:276] 0 containers: []
	W0318 14:29:44.584463 1129259 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 14:29:44.584478 1129259 logs.go:123] Gathering logs for kubelet ...
	I0318 14:29:44.584496 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 14:29:44.638997 1129259 logs.go:123] Gathering logs for dmesg ...
	I0318 14:29:44.639036 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 14:29:44.656641 1129259 logs.go:123] Gathering logs for describe nodes ...
	I0318 14:29:44.656679 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 14:29:44.757942 1129259 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 14:29:44.757976 1129259 logs.go:123] Gathering logs for CRI-O ...
	I0318 14:29:44.757994 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 14:29:44.878791 1129259 logs.go:123] Gathering logs for container status ...
	I0318 14:29:44.878838 1129259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 14:29:44.926371 1129259 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 14:29:44.926432 1129259 out.go:239] * 
	W0318 14:29:44.926513 1129259 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.926548 1129259 out.go:239] * 
	W0318 14:29:44.927402 1129259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 14:29:44.931815 1129259 out.go:177] 
	W0318 14:29:44.933471 1129259 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 14:29:44.933562 1129259 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 14:29:44.933609 1129259 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 14:29:44.935544 1129259 out.go:177] 
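	(The suggestion above is the log's own remediation hint; applied to this profile it would look roughly like the following, illustrative only, with the flag value taken verbatim from the suggestion and any other start flags from the original invocation omitted:

		out/minikube-linux-amd64 start -p old-k8s-version-782728 --extra-config=kubelet.cgroup-driver=systemd
	)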
	
	
	==> CRI-O <==
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.830184183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772844830156017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65e5a44d-0a55-4162-a64e-2b612848b65d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.830889078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1b4e23a-e069-4e93-9930-d82f693f7769 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.830962767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1b4e23a-e069-4e93-9930-d82f693f7769 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.830994707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d1b4e23a-e069-4e93-9930-d82f693f7769 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.872337329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a606fa8-6575-4297-bd30-b92b09443fbd name=/runtime.v1.RuntimeService/Version
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.872439348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a606fa8-6575-4297-bd30-b92b09443fbd name=/runtime.v1.RuntimeService/Version
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.873701497Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3fd06fe-1145-40a9-8476-f9b44c33ffd7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.874225326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772844874199784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3fd06fe-1145-40a9-8476-f9b44c33ffd7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.875000228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bcb9a15-23d7-4dd0-8eff-4e9d5aa7af04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.875055243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bcb9a15-23d7-4dd0-8eff-4e9d5aa7af04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.875092923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0bcb9a15-23d7-4dd0-8eff-4e9d5aa7af04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.911351250Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f717daa-c624-41a1-a945-3f34e6fced30 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.911459535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f717daa-c624-41a1-a945-3f34e6fced30 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.912983642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=171740f6-453e-4072-8533-640dc1af8826 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.913663977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772844913617980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=171740f6-453e-4072-8533-640dc1af8826 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.914428399Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fdef946-2453-49ca-beba-f8db1c49613b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.914481735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fdef946-2453-49ca-beba-f8db1c49613b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.914519174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1fdef946-2453-49ca-beba-f8db1c49613b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.949045874Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c606236-bcf7-4ace-9806-40cfe87a0119 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.949132007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c606236-bcf7-4ace-9806-40cfe87a0119 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.950602268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87627ca7-4f0e-48c2-ae63-5922b0477aa4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.950971539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710772844950950473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87627ca7-4f0e-48c2-ae63-5922b0477aa4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.951513222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b96300f1-c052-4535-ae3c-aba45da8e62c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.951577910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b96300f1-c052-4535-ae3c-aba45da8e62c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:40:44 old-k8s-version-782728 crio[653]: time="2024-03-18 14:40:44.951614012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b96300f1-c052-4535-ae3c-aba45da8e62c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar18 14:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052875] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041790] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.841305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.922199] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.676692] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.118921] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.062985] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068114] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.236009] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.138019] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.296452] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.964415] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.070819] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.228114] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[  +9.153953] kauditd_printk_skb: 46 callbacks suppressed
	[Mar18 14:25] systemd-fstab-generator[4965]: Ignoring "noauto" option for root device
	[Mar18 14:27] systemd-fstab-generator[5242]: Ignoring "noauto" option for root device
	[  +0.076165] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:40:45 up 19 min,  0 users,  load average: 0.10, 0.08, 0.08
	Linux old-k8s-version-782728 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0007486f0)
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009dbef0, 0x4f0ac20, 0xc000051bd0, 0x1, 0xc00009e0c0)
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000be2540, 0xc00009e0c0)
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b76e60, 0xc000670680)
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 18 14:40:43 old-k8s-version-782728 kubelet[6697]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 18 14:40:43 old-k8s-version-782728 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 18 14:40:43 old-k8s-version-782728 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 18 14:40:44 old-k8s-version-782728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 134.
	Mar 18 14:40:44 old-k8s-version-782728 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 18 14:40:44 old-k8s-version-782728 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 18 14:40:44 old-k8s-version-782728 kubelet[6724]: I0318 14:40:44.499753    6724 server.go:416] Version: v1.20.0
	Mar 18 14:40:44 old-k8s-version-782728 kubelet[6724]: I0318 14:40:44.500657    6724 server.go:837] Client rotation is on, will bootstrap in background
	Mar 18 14:40:44 old-k8s-version-782728 kubelet[6724]: I0318 14:40:44.504048    6724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 18 14:40:44 old-k8s-version-782728 kubelet[6724]: I0318 14:40:44.505770    6724 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 18 14:40:44 old-k8s-version-782728 kubelet[6724]: W0318 14:40:44.505785    6724 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-782728 -n old-k8s-version-782728
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 2 (277.521198ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-782728" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (114.49s)

                                                
                                    

Test pass (256/325)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.67
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.16
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 5.94
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 8.85
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.16
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.6
31 TestOffline 127.76
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 146.35
38 TestAddons/parallel/Registry 19.51
40 TestAddons/parallel/InspektorGadget 12.12
42 TestAddons/parallel/HelmTiller 12.08
44 TestAddons/parallel/CSI 71.68
45 TestAddons/parallel/Headlamp 16.53
46 TestAddons/parallel/CloudSpanner 5.62
47 TestAddons/parallel/LocalPath 12.46
48 TestAddons/parallel/NvidiaDevicePlugin 6.62
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 47.12
55 TestCertExpiration 298.56
57 TestForceSystemdFlag 55.27
58 TestForceSystemdEnv 49.82
60 TestKVMDriverInstallOrUpdate 3.89
64 TestErrorSpam/setup 43.94
65 TestErrorSpam/start 0.41
66 TestErrorSpam/status 0.8
67 TestErrorSpam/pause 1.68
68 TestErrorSpam/unpause 1.73
69 TestErrorSpam/stop 6.19
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 98.33
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 53.39
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.06
81 TestFunctional/serial/CacheCmd/cache/add_local 1.98
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 425.81
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.27
92 TestFunctional/serial/LogsFileCmd 1.29
93 TestFunctional/serial/InvalidService 4.32
95 TestFunctional/parallel/ConfigCmd 0.47
97 TestFunctional/parallel/DryRun 0.41
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 1.24
103 TestFunctional/parallel/ServiceCmdConnect 12.61
104 TestFunctional/parallel/AddonsCmd 0.15
105 TestFunctional/parallel/PersistentVolumeClaim 44.46
107 TestFunctional/parallel/SSHCmd 0.48
108 TestFunctional/parallel/CpCmd 1.37
109 TestFunctional/parallel/MySQL 26.72
110 TestFunctional/parallel/FileSync 0.29
111 TestFunctional/parallel/CertSync 1.44
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
119 TestFunctional/parallel/License 0.17
120 TestFunctional/parallel/ServiceCmd/DeployApp 12.22
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
133 TestFunctional/parallel/ServiceCmd/List 0.52
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
136 TestFunctional/parallel/ServiceCmd/Format 0.37
137 TestFunctional/parallel/Version/short 0.06
138 TestFunctional/parallel/Version/components 0.7
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
143 TestFunctional/parallel/ImageCommands/ImageBuild 2.64
144 TestFunctional/parallel/ImageCommands/Setup 1.55
145 TestFunctional/parallel/ServiceCmd/URL 0.61
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.86
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
148 TestFunctional/parallel/ProfileCmd/profile_list 0.39
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
150 TestFunctional/parallel/MountCmd/any-port 16.27
151 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.86
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.72
153 TestFunctional/parallel/MountCmd/specific-port 2.1
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.15
155 TestFunctional/parallel/ImageCommands/ImageRemove 0.77
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.6
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.14
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestMultiControlPlane/serial/StartCluster 214.92
166 TestMultiControlPlane/serial/DeployApp 5.96
167 TestMultiControlPlane/serial/PingHostFromPods 1.48
168 TestMultiControlPlane/serial/AddWorkerNode 50.14
169 TestMultiControlPlane/serial/NodeLabels 0.08
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.57
171 TestMultiControlPlane/serial/CopyFile 14.04
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.5
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.7
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.41
180 TestMultiControlPlane/serial/RestartCluster 354.67
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.44
182 TestMultiControlPlane/serial/AddSecondaryNode 75.57
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.58
187 TestJSONOutput/start/Command 98.23
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.76
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.66
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.46
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.23
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 94.51
219 TestMountStart/serial/StartWithMountFirst 27.89
220 TestMountStart/serial/VerifyMountFirst 0.41
221 TestMountStart/serial/StartWithMountSecond 27.76
222 TestMountStart/serial/VerifyMountSecond 0.41
223 TestMountStart/serial/DeleteFirst 0.71
224 TestMountStart/serial/VerifyMountPostDelete 0.41
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 22.5
227 TestMountStart/serial/VerifyMountPostStop 0.41
230 TestMultiNode/serial/FreshStart2Nodes 104.34
231 TestMultiNode/serial/DeployApp2Nodes 4.68
232 TestMultiNode/serial/PingHostFrom2Pods 0.95
233 TestMultiNode/serial/AddNode 44.65
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.24
236 TestMultiNode/serial/CopyFile 7.9
237 TestMultiNode/serial/StopNode 2.5
238 TestMultiNode/serial/StartAfterStop 29.88
240 TestMultiNode/serial/DeleteNode 2.49
242 TestMultiNode/serial/RestartMultiNode 170.19
243 TestMultiNode/serial/ValidateNameConflict 43.97
250 TestScheduledStopUnix 115.3
254 TestRunningBinaryUpgrade 247.39
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 96.88
268 TestNetworkPlugins/group/false 6.63
279 TestNoKubernetes/serial/StartWithStopK8s 39.9
280 TestNoKubernetes/serial/Start 54.58
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
282 TestNoKubernetes/serial/ProfileList 13.44
283 TestNoKubernetes/serial/Stop 1.38
284 TestNoKubernetes/serial/StartNoArgs 30.06
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
287 TestPause/serial/Start 99.69
288 TestStoppedBinaryUpgrade/Setup 2.01
289 TestStoppedBinaryUpgrade/Upgrade 114.83
290 TestPause/serial/SecondStartNoReconfiguration 55.52
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
292 TestNetworkPlugins/group/auto/Start 127.94
293 TestPause/serial/Pause 0.9
294 TestPause/serial/VerifyStatus 0.33
295 TestPause/serial/Unpause 0.9
296 TestPause/serial/PauseAgain 1.17
297 TestPause/serial/DeletePaused 1.18
298 TestPause/serial/VerifyDeletedResources 0.61
299 TestNetworkPlugins/group/kindnet/Start 88.95
300 TestNetworkPlugins/group/calico/Start 124.67
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
303 TestNetworkPlugins/group/kindnet/NetCatPod 12.26
304 TestNetworkPlugins/group/auto/KubeletFlags 0.31
305 TestNetworkPlugins/group/auto/NetCatPod 14.43
306 TestNetworkPlugins/group/kindnet/DNS 0.22
307 TestNetworkPlugins/group/kindnet/Localhost 0.18
308 TestNetworkPlugins/group/kindnet/HairPin 0.16
309 TestNetworkPlugins/group/auto/DNS 0.21
310 TestNetworkPlugins/group/auto/Localhost 0.19
311 TestNetworkPlugins/group/auto/HairPin 0.2
312 TestNetworkPlugins/group/custom-flannel/Start 83.99
313 TestNetworkPlugins/group/enable-default-cni/Start 99.1
314 TestNetworkPlugins/group/flannel/Start 136.78
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.22
317 TestNetworkPlugins/group/calico/NetCatPod 10.24
318 TestNetworkPlugins/group/calico/DNS 0.19
319 TestNetworkPlugins/group/calico/Localhost 0.16
320 TestNetworkPlugins/group/calico/HairPin 0.19
321 TestNetworkPlugins/group/bridge/Start 109.53
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.28
324 TestNetworkPlugins/group/custom-flannel/DNS 0.24
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
327 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
328 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
335 TestStartStop/group/no-preload/serial/FirstStart 132.28
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
338 TestNetworkPlugins/group/flannel/NetCatPod 15.31
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
340 TestNetworkPlugins/group/bridge/NetCatPod 12.36
341 TestNetworkPlugins/group/flannel/DNS 0.17
342 TestNetworkPlugins/group/flannel/Localhost 0.14
343 TestNetworkPlugins/group/bridge/DNS 0.2
344 TestNetworkPlugins/group/flannel/HairPin 0.14
345 TestNetworkPlugins/group/bridge/Localhost 0.17
346 TestNetworkPlugins/group/bridge/HairPin 0.15
348 TestStartStop/group/embed-certs/serial/FirstStart 102.46
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 123.52
351 TestStartStop/group/no-preload/serial/DeployApp 8.33
352 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
354 TestStartStop/group/embed-certs/serial/DeployApp 9.29
355 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
357 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
363 TestStartStop/group/no-preload/serial/SecondStart 701.79
365 TestStartStop/group/embed-certs/serial/SecondStart 599.95
367 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 600.84
368 TestStartStop/group/old-k8s-version/serial/Stop 2.37
369 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
380 TestStartStop/group/newest-cni/serial/FirstStart 56.99
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
383 TestStartStop/group/newest-cni/serial/Stop 11.36
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
385 TestStartStop/group/newest-cni/serial/SecondStart 39.26
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
389 TestStartStop/group/newest-cni/serial/Pause 2.6
x
+
TestDownloadOnly/v1.20.0/json-events (11.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-091393 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-091393 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.66844491s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.67s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-091393
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-091393: exit status 85 (79.352249ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-091393 | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC |          |
	|         | -p download-only-091393        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:44:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:44:42.317775 1075220 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:44:42.318083 1075220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:44:42.318095 1075220 out.go:304] Setting ErrFile to fd 2...
	I0318 12:44:42.318102 1075220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:44:42.318309 1075220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	W0318 12:44:42.318488 1075220 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18427-1067917/.minikube/config/config.json: open /home/jenkins/minikube-integration/18427-1067917/.minikube/config/config.json: no such file or directory
	I0318 12:44:42.319116 1075220 out.go:298] Setting JSON to true
	I0318 12:44:42.320374 1075220 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":16029,"bootTime":1710749853,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:44:42.320453 1075220 start.go:139] virtualization: kvm guest
	I0318 12:44:42.323126 1075220 out.go:97] [download-only-091393] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:44:42.324748 1075220 out.go:169] MINIKUBE_LOCATION=18427
	W0318 12:44:42.323280 1075220 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 12:44:42.323353 1075220 notify.go:220] Checking for updates...
	I0318 12:44:42.326608 1075220 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:44:42.328327 1075220 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 12:44:42.329783 1075220 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 12:44:42.331259 1075220 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 12:44:42.333555 1075220 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 12:44:42.333862 1075220 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:44:42.368342 1075220 out.go:97] Using the kvm2 driver based on user configuration
	I0318 12:44:42.368380 1075220 start.go:297] selected driver: kvm2
	I0318 12:44:42.368395 1075220 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:44:42.368746 1075220 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:44:42.368832 1075220 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:44:42.384871 1075220 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:44:42.384952 1075220 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:44:42.385539 1075220 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 12:44:42.385681 1075220 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 12:44:42.385748 1075220 cni.go:84] Creating CNI manager for ""
	I0318 12:44:42.385772 1075220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:44:42.385781 1075220 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 12:44:42.385842 1075220 start.go:340] cluster config:
	{Name:download-only-091393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-091393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:44:42.386061 1075220 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:44:42.387936 1075220 out.go:97] Downloading VM boot image ...
	I0318 12:44:42.387990 1075220 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:44:47.069497 1075220 out.go:97] Starting "download-only-091393" primary control-plane node in "download-only-091393" cluster
	I0318 12:44:47.069531 1075220 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 12:44:47.096788 1075220 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 12:44:47.096832 1075220 cache.go:56] Caching tarball of preloaded images
	I0318 12:44:47.097000 1075220 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 12:44:47.099160 1075220 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 12:44:47.099198 1075220 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:44:47.124693 1075220 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-091393 host does not exist
	  To start a cluster, run: "minikube start -p download-only-091393"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-091393
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (5.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-994148 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-994148 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.941371214s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.94s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-994148
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-994148: exit status 85 (80.938201ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-091393 | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC |                     |
	|         | -p download-only-091393        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC | 18 Mar 24 12:44 UTC |
	| delete  | -p download-only-091393        | download-only-091393 | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC | 18 Mar 24 12:44 UTC |
	| start   | -o=json --download-only        | download-only-994148 | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC |                     |
	|         | -p download-only-994148        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:44:54
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:44:54.368928 1075390 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:44:54.369095 1075390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:44:54.369108 1075390 out.go:304] Setting ErrFile to fd 2...
	I0318 12:44:54.369112 1075390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:44:54.369364 1075390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 12:44:54.370065 1075390 out.go:298] Setting JSON to true
	I0318 12:44:54.371265 1075390 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":16041,"bootTime":1710749853,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:44:54.371343 1075390 start.go:139] virtualization: kvm guest
	I0318 12:44:54.373688 1075390 out.go:97] [download-only-994148] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:44:54.375313 1075390 out.go:169] MINIKUBE_LOCATION=18427
	I0318 12:44:54.373955 1075390 notify.go:220] Checking for updates...
	I0318 12:44:54.378310 1075390 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:44:54.379876 1075390 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 12:44:54.381257 1075390 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 12:44:54.382643 1075390 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 12:44:54.385264 1075390 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 12:44:54.385537 1075390 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:44:54.419237 1075390 out.go:97] Using the kvm2 driver based on user configuration
	I0318 12:44:54.419283 1075390 start.go:297] selected driver: kvm2
	I0318 12:44:54.419304 1075390 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:44:54.419710 1075390 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:44:54.419851 1075390 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:44:54.436307 1075390 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:44:54.436389 1075390 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:44:54.436906 1075390 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 12:44:54.437069 1075390 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 12:44:54.437140 1075390 cni.go:84] Creating CNI manager for ""
	I0318 12:44:54.437153 1075390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:44:54.437162 1075390 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 12:44:54.437220 1075390 start.go:340] cluster config:
	{Name:download-only-994148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-994148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:44:54.437317 1075390 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:44:54.438900 1075390 out.go:97] Starting "download-only-994148" primary control-plane node in "download-only-994148" cluster
	I0318 12:44:54.438930 1075390 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:44:54.462226 1075390 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 12:44:54.462265 1075390 cache.go:56] Caching tarball of preloaded images
	I0318 12:44:54.462433 1075390 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:44:54.464142 1075390 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0318 12:44:54.464164 1075390 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:44:54.486611 1075390 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 12:44:58.695002 1075390 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:44:58.695128 1075390 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-994148 host does not exist
	  To start a cluster, run: "minikube start -p download-only-994148"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-994148
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (8.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-954927 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-954927 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.850375301s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (8.85s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-954927
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-954927: exit status 85 (81.118374ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-091393 | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC |                     |
	|         | -p download-only-091393           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC | 18 Mar 24 12:44 UTC |
	| delete  | -p download-only-091393           | download-only-091393 | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC | 18 Mar 24 12:44 UTC |
	| start   | -o=json --download-only           | download-only-994148 | jenkins | v1.32.0 | 18 Mar 24 12:44 UTC |                     |
	|         | -p download-only-994148           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| delete  | -p download-only-994148           | download-only-994148 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC | 18 Mar 24 12:45 UTC |
	| start   | -o=json --download-only           | download-only-954927 | jenkins | v1.32.0 | 18 Mar 24 12:45 UTC |                     |
	|         | -p download-only-954927           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:45:00
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:45:00.683692 1075556 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:45:00.683892 1075556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:45:00.683905 1075556 out.go:304] Setting ErrFile to fd 2...
	I0318 12:45:00.683909 1075556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:45:00.684127 1075556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 12:45:00.684780 1075556 out.go:298] Setting JSON to true
	I0318 12:45:00.685897 1075556 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":16048,"bootTime":1710749853,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:45:00.685974 1075556 start.go:139] virtualization: kvm guest
	I0318 12:45:00.688425 1075556 out.go:97] [download-only-954927] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:45:00.690125 1075556 out.go:169] MINIKUBE_LOCATION=18427
	I0318 12:45:00.688679 1075556 notify.go:220] Checking for updates...
	I0318 12:45:00.692744 1075556 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:45:00.694247 1075556 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 12:45:00.695809 1075556 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 12:45:00.697277 1075556 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 12:45:00.699703 1075556 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 12:45:00.700027 1075556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:45:00.733423 1075556 out.go:97] Using the kvm2 driver based on user configuration
	I0318 12:45:00.733480 1075556 start.go:297] selected driver: kvm2
	I0318 12:45:00.733498 1075556 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:45:00.733850 1075556 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:45:00.733976 1075556 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18427-1067917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:45:00.749814 1075556 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:45:00.749883 1075556 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:45:00.750391 1075556 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 12:45:00.750551 1075556 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 12:45:00.750633 1075556 cni.go:84] Creating CNI manager for ""
	I0318 12:45:00.750650 1075556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:45:00.750662 1075556 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 12:45:00.750741 1075556 start.go:340] cluster config:
	{Name:download-only-954927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-954927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:45:00.750866 1075556 iso.go:125] acquiring lock: {Name:mkb9005c39e1a5881f5d834c05544023d041c1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:45:00.752811 1075556 out.go:97] Starting "download-only-954927" primary control-plane node in "download-only-954927" cluster
	I0318 12:45:00.752837 1075556 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 12:45:00.788891 1075556 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 12:45:00.788932 1075556 cache.go:56] Caching tarball of preloaded images
	I0318 12:45:00.789128 1075556 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 12:45:00.791292 1075556 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0318 12:45:00.791325 1075556 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:45:00.811443 1075556 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18427-1067917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-954927 host does not exist
	  To start a cluster, run: "minikube start -p download-only-954927"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-954927
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-502218 --alsologtostderr --binary-mirror http://127.0.0.1:38477 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-502218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-502218
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (127.76s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-096581 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-096581 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m6.427344058s)
helpers_test.go:175: Cleaning up "offline-crio-096581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-096581
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-096581: (1.336735768s)
--- PASS: TestOffline (127.76s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-106685
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-106685: exit status 85 (67.781206ms)

                                                
                                                
-- stdout --
	* Profile "addons-106685" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-106685"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-106685
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-106685: exit status 85 (69.502575ms)

                                                
                                                
-- stdout --
	* Profile "addons-106685" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-106685"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (146.35s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-106685 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-106685 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.35372474s)
--- PASS: TestAddons/Setup (146.35s)

                                                
                                    
TestAddons/parallel/Registry (19.51s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 25.795697ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vw2h8" [de58d932-6f78-479f-9d49-55619fa3881a] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006655237s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-j97lj" [8ea57f10-a30d-4291-9636-1e99d163e226] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004752668s
addons_test.go:340: (dbg) Run:  kubectl --context addons-106685 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-106685 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-106685 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.965773676s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-106685 addons disable registry --alsologtostderr -v=1: (1.314597047s)
--- PASS: TestAddons/parallel/Registry (19.51s)
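The registry check above reduces to probing the addon's in-cluster DNS name from a throwaway busybox pod. The same probe can be run by hand; commands are taken from the test, with minikube used in place of the CI binary path:

# Probe the registry service from inside the cluster.
kubectl --context addons-106685 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
# The node IP used for the host-side check comes from:
minikube -p addons-106685 ip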

                                                
                                    
TestAddons/parallel/InspektorGadget (12.12s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-d9pkd" [3941952b-6285-4bc9-ae33-4e5fb135b104] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
2024/03/18 12:47:55 [DEBUG] GET http://192.168.39.205:5000
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003771693s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-106685
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-106685: (6.112062686s)
--- PASS: TestAddons/parallel/InspektorGadget (12.12s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.08s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.799991ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-599zv" [bf1c4d73-4b36-4d8d-a497-58eeab0d4f6d] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006335941s
addons_test.go:473: (dbg) Run:  kubectl --context addons-106685 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-106685 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.955229778s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-linux-amd64 -p addons-106685 addons disable helm-tiller --alsologtostderr -v=1: (1.110305985s)
--- PASS: TestAddons/parallel/HelmTiller (12.08s)

                                                
                                    
TestAddons/parallel/CSI (71.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 28.579633ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-106685 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-106685 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b13dbd3c-1913-4e7b-a366-d8af8f936745] Pending
helpers_test.go:344: "task-pv-pod" [b13dbd3c-1913-4e7b-a366-d8af8f936745] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b13dbd3c-1913-4e7b-a366-d8af8f936745] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.004980436s
addons_test.go:584: (dbg) Run:  kubectl --context addons-106685 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-106685 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-106685 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-106685 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-106685 delete pod task-pv-pod: (1.595796822s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-106685 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-106685 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-106685 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b27cd9f2-c3cd-4473-bf19-f03c37e1afbe] Pending
helpers_test.go:344: "task-pv-pod-restore" [b27cd9f2-c3cd-4473-bf19-f03c37e1afbe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b27cd9f2-c3cd-4473-bf19-f03c37e1afbe] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004385085s
addons_test.go:626: (dbg) Run:  kubectl --context addons-106685 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-106685 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-106685 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-106685 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.920612012s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (71.68s)
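The long run of helpers_test.go:394 lines above is just a poll on the PVC phase until it reports Bound. A minimal bash sketch of the same wait, with the resource names taken from the test and the retry count an assumption:

# Poll the PVC phase, mirroring the repeated jsonpath queries above.
for i in $(seq 1 60); do
  phase=$(kubectl --context addons-106685 get pvc hpvc -n default \
    -o jsonpath='{.status.phase}')
  [ "$phase" = "Bound" ] && break
  sleep 2
done
echo "hpvc phase: $phase"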

                                                
                                    
TestAddons/parallel/Headlamp (16.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-106685 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-106685 --alsologtostderr -v=1: (1.525449059s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-pgf48" [12add79b-561a-4482-8417-e1c41272f80c] Pending
helpers_test.go:344: "headlamp-5485c556b-pgf48" [12add79b-561a-4482-8417-e1c41272f80c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-pgf48" [12add79b-561a-4482-8417-e1c41272f80c] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.004590507s
--- PASS: TestAddons/parallel/Headlamp (16.53s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-t7xl8" [1471fde7-0973-4eaa-a6bc-a01b595958dc] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004720397s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-106685
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                    
TestAddons/parallel/LocalPath (12.46s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-106685 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-106685 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106685 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [68f7264d-80a3-41a2-b8f1-1a761d375d34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [68f7264d-80a3-41a2-b8f1-1a761d375d34] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [68f7264d-80a3-41a2-b8f1-1a761d375d34] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.007166755s
addons_test.go:891: (dbg) Run:  kubectl --context addons-106685 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 ssh "cat /opt/local-path-provisioner/pvc-e86d5e17-8190-4e06-8916-09db8624ca3e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-106685 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-106685 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-106685 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.46s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rgg96" [375e6fa2-ca11-40df-b093-1c93e6401092] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005112663s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-106685
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-2l56b" [60ebbf3d-9ad7-46d5-8322-97199a8c455a] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004690499s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-106685 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-106685 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (47.12s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-782791 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-782791 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (45.758774099s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-782791 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-782791 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-782791 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-782791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-782791
--- PASS: TestCertOptions (47.12s)
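The openssl call in this test exists to confirm that the extra --apiserver-ips and --apiserver-names values land in the serving certificate's SANs. One way to eyeball the same thing, with the cert path taken from the test and the grep pattern an assumption about openssl's output layout:

# Dump the apiserver certificate and show its Subject Alternative Names.
minikube -p cert-options-782791 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# 192.168.15.15 and www.google.com should appear among the IP/DNS entries.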

                                                
                                    
TestCertExpiration (298.56s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-277126 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-277126 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m17.972184999s)
E0318 14:02:37.319981 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-277126 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-277126 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.447345161s)
helpers_test.go:175: Cleaning up "cert-expiration-277126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-277126
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-277126: (1.137200396s)
--- PASS: TestCertExpiration (298.56s)
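To see the effect of the second start's --cert-expiration=8760h, the apiserver certificate's expiry can be read back from the node. The cert path is borrowed from TestCertOptions above; treating this file as representative of the re-issued certs is an assumption:

# Print the expiry date of the apiserver serving certificate.
minikube -p cert-expiration-277126 ssh \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"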

                                                
                                    
TestForceSystemdFlag (55.27s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-563319 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-563319 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.00397371s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-563319 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-563319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-563319
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-563319: (1.043227944s)
--- PASS: TestForceSystemdFlag (55.27s)
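The file read in this test is the CRI-O drop-in that minikube writes; with --force-systemd the expectation is that it selects the systemd cgroup manager. A sketch of the same inspection (grep pattern assumed, file path from the test):

# Show the cgroup manager configured in the generated CRI-O drop-in.
minikube -p force-systemd-flag-563319 ssh \
  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager
# Expected with --force-systemd: cgroup_manager = "systemd"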

                                                
                                    
TestForceSystemdEnv (49.82s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-147914 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-147914 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.792438809s)
helpers_test.go:175: Cleaning up "force-systemd-env-147914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-147914
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-147914: (1.026302042s)
--- PASS: TestForceSystemdEnv (49.82s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.89s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.89s)

                                                
                                    
TestErrorSpam/setup (43.94s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-726434 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-726434 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-726434 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-726434 --driver=kvm2  --container-runtime=crio: (43.94293041s)
--- PASS: TestErrorSpam/setup (43.94s)

                                                
                                    
TestErrorSpam/start (0.41s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

                                                
                                    
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
TestErrorSpam/pause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 pause
--- PASS: TestErrorSpam/pause (1.68s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (6.19s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 stop: (2.324091686s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 stop: (1.803096473s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-726434 --log_dir /tmp/nospam-726434 stop: (2.062923566s)
--- PASS: TestErrorSpam/stop (6.19s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18427-1067917/.minikube/files/etc/test/nested/copy/1075208/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (98.33s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044661 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-044661 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m38.324598637s)
--- PASS: TestFunctional/serial/StartWithProxy (98.33s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (53.39s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044661 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-044661 --alsologtostderr -v=8: (53.38677505s)
functional_test.go:659: soft start took 53.387647109s for "functional-044661" cluster.
--- PASS: TestFunctional/serial/SoftStart (53.39s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-044661 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 cache add registry.k8s.io/pause:3.3: (1.060664215s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 cache add registry.k8s.io/pause:latest: (1.028448499s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-044661 /tmp/TestFunctionalserialCacheCmdcacheadd_local1034233102/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 cache add minikube-local-cache-test:functional-044661
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 cache add minikube-local-cache-test:functional-044661: (1.591643862s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 cache delete minikube-local-cache-test:functional-044661
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-044661
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.98s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044661 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (233.064154ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
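The cache_reload sequence above doubles as a recipe: remove an image inside the node, confirm it is gone, then have minikube push it back from the host-side cache. Condensed, with minikube used in place of the CI binary path:

# Remove the image from the node's container storage.
minikube -p functional-044661 ssh sudo crictl rmi registry.k8s.io/pause:latest
# This inspect is now expected to fail with "no such image".
minikube -p functional-044661 ssh sudo crictl inspecti registry.k8s.io/pause:latest
# Re-push everything held in minikube's local cache into the node.
minikube -p functional-044661 cache reload
# The inspect succeeds again afterwards.
minikube -p functional-044661 ssh sudo crictl inspecti registry.k8s.io/pause:latest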

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 kubectl -- --context functional-044661 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-044661 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (425.81s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044661 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0318 12:57:37.321772 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:37.327557 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:37.337820 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:37.358102 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:37.398452 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:37.478795 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:37.639254 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:37.959913 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:38.600846 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:39.881398 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:42.442515 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:47.563093 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:57:57.804053 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:58:18.284892 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 12:58:59.245313 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 13:00:21.167901 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 13:02:37.321690 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 13:03:05.008265 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-044661 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (7m5.804267681s)
functional_test.go:757: restart took 7m5.804479574s for "functional-044661" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (425.81s)
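To confirm that the --extra-config value actually reached the control plane, the flag can be read off the static kube-apiserver pod. The label selector and grep below are assumptions about the kubeadm pod layout; the flag name comes from the test:

# Look for the injected admission-plugin flag on the apiserver pod spec.
kubectl --context functional-044661 -n kube-system get pods \
  -l component=kube-apiserver -o yaml | grep enable-admission-plugins
# Expect an --enable-admission-plugins entry containing NamespaceAutoProvision.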

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-044661 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 logs: (1.271736098s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 logs --file /tmp/TestFunctionalserialLogsFileCmd4164714925/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 logs --file /tmp/TestFunctionalserialLogsFileCmd4164714925/001/logs.txt: (1.285119778s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                    
TestFunctional/serial/InvalidService (4.32s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-044661 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-044661
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-044661: exit status 115 (304.339451ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.198:30619 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-044661 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)
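The SVC_UNREACHABLE failure above is what "minikube service" reports when a Service's selector matches no running pod. The manifest below is a hypothetical stand-in for testdata/invalidsvc.yaml (not its actual contents), just enough to reproduce the exit status 115 shown:

# Create a NodePort service whose selector matches nothing.
kubectl --context functional-044661 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist   # no pod carries this label
  ports:
  - port: 80
EOF
# minikube then refuses to open it because the service has no ready endpoint.
minikube -p functional-044661 service invalid-svc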

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044661 config get cpus: exit status 14 (74.657988ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044661 config get cpus: exit status 14 (76.98698ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
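
The sequence above is a set/get/unset round trip: `config get cpus` exits 14 whenever the key is absent, both before the value is set and after it is unset again, and that is exactly what the two Non-zero exits record. A hedged, table-driven sketch of the same round trip (exit code 14 and the profile name come from this run; the plain `minikube` binary name is a placeholder):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes a minikube subcommand and returns its exit code (0 on success).
func run(args ...string) int {
	cmd := exec.Command("minikube", args...)
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1 // binary missing or not startable
	}
	return 0
}

func main() {
	steps := []struct {
		args []string
		want int
	}{
		{[]string{"-p", "functional-044661", "config", "unset", "cpus"}, 0},
		{[]string{"-p", "functional-044661", "config", "get", "cpus"}, 14}, // key absent
		{[]string{"-p", "functional-044661", "config", "set", "cpus", "2"}, 0},
		{[]string{"-p", "functional-044661", "config", "get", "cpus"}, 0},
		{[]string{"-p", "functional-044661", "config", "unset", "cpus"}, 0},
		{[]string{"-p", "functional-044661", "config", "get", "cpus"}, 14}, // absent again
	}
	for _, s := range steps {
		if got := run(s.args...); got != s.want {
			fmt.Printf("minikube %v: exit %d, want %d\n", s.args, got, s.want)
		}
	}
}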

                                                
                                    
TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044661 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-044661 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (160.661237ms)

                                                
                                                
-- stdout --
	* [functional-044661] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:04:47.583259 1084014 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:04:47.583582 1084014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:04:47.583598 1084014 out.go:304] Setting ErrFile to fd 2...
	I0318 13:04:47.583604 1084014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:04:47.583907 1084014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:04:47.584575 1084014 out.go:298] Setting JSON to false
	I0318 13:04:47.585740 1084014 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17235,"bootTime":1710749853,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:04:47.585814 1084014 start.go:139] virtualization: kvm guest
	I0318 13:04:47.587893 1084014 out.go:177] * [functional-044661] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:04:47.589716 1084014 notify.go:220] Checking for updates...
	I0318 13:04:47.589721 1084014 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 13:04:47.591284 1084014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:04:47.592612 1084014 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:04:47.593944 1084014 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:04:47.595300 1084014 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:04:47.596667 1084014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:04:47.598623 1084014 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:04:47.599254 1084014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:04:47.599319 1084014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:04:47.619760 1084014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0318 13:04:47.620366 1084014 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:04:47.621167 1084014 main.go:141] libmachine: Using API Version  1
	I0318 13:04:47.621200 1084014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:04:47.621591 1084014 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:04:47.621813 1084014 main.go:141] libmachine: (functional-044661) Calling .DriverName
	I0318 13:04:47.622158 1084014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:04:47.622486 1084014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:04:47.622530 1084014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:04:47.637466 1084014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0318 13:04:47.637936 1084014 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:04:47.638446 1084014 main.go:141] libmachine: Using API Version  1
	I0318 13:04:47.638474 1084014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:04:47.638833 1084014 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:04:47.639050 1084014 main.go:141] libmachine: (functional-044661) Calling .DriverName
	I0318 13:04:47.674334 1084014 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:04:47.675636 1084014 start.go:297] selected driver: kvm2
	I0318 13:04:47.675664 1084014 start.go:901] validating driver "kvm2" against &{Name:functional-044661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-044661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:04:47.675883 1084014 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:04:47.678591 1084014 out.go:177] 
	W0318 13:04:47.679630 1084014 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0318 13:04:47.680957 1084014 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044661 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
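
Both dry-run invocations validate the request without creating anything: the first asks for 250MB and is rejected with exit status 23 and RSRC_INSUFFICIENT_REQ_MEMORY (the usable minimum is 1800MB), the second omits --memory and succeeds. A short sketch of the failing case only, with placeholder binary/profile names:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder binary/profile; the run above uses out/minikube-linux-amd64 -p functional-044661.
	cmd := exec.Command("minikube", "start", "-p", "functional-044661",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	ok := errors.As(err, &ee) && ee.ExitCode() == 23 && // exit code seen in this run
		bytes.Contains(out, []byte("RSRC_INSUFFICIENT_REQ_MEMORY"))
	fmt.Println("rejected undersized memory request:", ok)
}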

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044661 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-044661 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (174.307878ms)

                                                
                                                
-- stdout --
	* [functional-044661] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:04:48.011739 1084070 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:04:48.011906 1084070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:04:48.011917 1084070 out.go:304] Setting ErrFile to fd 2...
	I0318 13:04:48.011921 1084070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:04:48.012245 1084070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:04:48.012814 1084070 out.go:298] Setting JSON to false
	I0318 13:04:48.013823 1084070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17235,"bootTime":1710749853,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:04:48.013929 1084070 start.go:139] virtualization: kvm guest
	I0318 13:04:48.016423 1084070 out.go:177] * [functional-044661] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0318 13:04:48.017862 1084070 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 13:04:48.019291 1084070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:04:48.017877 1084070 notify.go:220] Checking for updates...
	I0318 13:04:48.022114 1084070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 13:04:48.023772 1084070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 13:04:48.025260 1084070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:04:48.026727 1084070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:04:48.028465 1084070 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:04:48.028844 1084070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:04:48.028893 1084070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:04:48.044248 1084070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45315
	I0318 13:04:48.044627 1084070 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:04:48.045250 1084070 main.go:141] libmachine: Using API Version  1
	I0318 13:04:48.045279 1084070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:04:48.045609 1084070 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:04:48.045806 1084070 main.go:141] libmachine: (functional-044661) Calling .DriverName
	I0318 13:04:48.046087 1084070 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:04:48.046511 1084070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:04:48.046581 1084070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:04:48.061823 1084070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0318 13:04:48.062331 1084070 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:04:48.062833 1084070 main.go:141] libmachine: Using API Version  1
	I0318 13:04:48.062858 1084070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:04:48.063225 1084070 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:04:48.063393 1084070 main.go:141] libmachine: (functional-044661) Calling .DriverName
	I0318 13:04:48.097198 1084070 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0318 13:04:48.098498 1084070 start.go:297] selected driver: kvm2
	I0318 13:04:48.098523 1084070 start.go:901] validating driver "kvm2" against &{Name:functional-044661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-044661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:04:48.098630 1084070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:04:48.100868 1084070 out.go:177] 
	W0318 13:04:48.102114 1084070 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0318 13:04:48.103269 1084070 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
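
The French output ("Utilisation du pilote kvm2 basé sur le profil existant", "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") comes from running the same undersized dry-run with a French locale in the environment; minikube selects its message catalog from the locale variables. A hedged sketch, assuming LC_ALL/LANG is enough to pick the fr catalog (the test may set the locale slightly differently):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-044661",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	// Assumption: a French locale in the environment selects the fr message catalog.
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput() // a non-zero exit is expected, same as the English run

	if strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized error message found")
	}
}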

                                                
                                    
TestFunctional/parallel/StatusCmd (1.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)
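
The three invocations cover the default human-readable status, a Go-template format string, and JSON. The JSON form is the easiest to consume programmatically; a sketch that shells out and decodes it (the field names mirror the template keys used above, Host/Kubelet/APIServer/Kubeconfig, which is an assumption about the JSON shape rather than a documented contract):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// status mirrors the keys referenced by the -f template above.
type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-044661", "status", "-o", "json").Output()
	if err != nil {
		// minikube status encodes degraded states in its exit code; stdout may still hold valid JSON.
		fmt.Println("status exited non-zero:", err)
	}
	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n", st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}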

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-044661 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-044661 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-52h22" [11e0448b-c51e-40b5-85c7-0c8f195ba010] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-52h22" [11e0448b-c51e-40b5-85c7-0c8f195ba010] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.00502104s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.198:31784
functional_test.go:1671: http://192.168.39.198:31784: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-52h22

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.198:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.198:31784
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.61s)
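
End to end, this test creates a Deployment from the echoserver image, exposes it as a NodePort Service, asks minikube for the node URL, and issues a plain HTTP GET against it; the echo body above is what the server returns. A condensed sketch of the same flow (image tag and names mirror the log; kubectl/minikube being on PATH and the readiness wait being handled elsewhere are assumptions):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func must(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return string(out)
}

func main() {
	must("kubectl", "create", "deployment", "hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")
	must("kubectl", "expose", "deployment", "hello-node-connect", "--type=NodePort", "--port=8080")
	// The real test waits for the app=hello-node-connect pod to be Running before using the URL.
	url := strings.TrimSpace(must("minikube", "-p", "functional-044661", "service", "hello-node-connect", "--url"))

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s\n", url, resp.StatusCode, body)
}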

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.46s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [55d6dcce-caf3-4ca3-a51f-9b1f34aba0e3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00540557s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-044661 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-044661 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-044661 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-044661 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-044661 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f5fe843e-a007-41a5-adf2-a717fd45bfbe] Pending
helpers_test.go:344: "sp-pod" [f5fe843e-a007-41a5-adf2-a717fd45bfbe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f5fe843e-a007-41a5-adf2-a717fd45bfbe] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.00740659s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-044661 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-044661 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-044661 delete -f testdata/storage-provisioner/pod.yaml: (1.323162999s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-044661 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [64d29328-a344-484d-aef4-1328988742bc] Pending
helpers_test.go:344: "sp-pod" [64d29328-a344-484d-aef4-1328988742bc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [64d29328-a344-484d-aef4-1328988742bc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.005556976s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-044661 exec sp-pod -- ls /tmp/mount
E0318 13:07:37.320645 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.46s)
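
The key assertion is the last one: a file written into the PVC-backed mount by the first sp-pod is still visible from a brand-new sp-pod after the original is deleted, which is what distinguishes a PersistentVolumeClaim from pod-local storage. A sketch of just that persistence check, assuming the claim and the pod manifest (with its /tmp/mount mount path) from testdata are already applied:

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write through the PVC mount
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // remove the pod, keep the claim
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // recreate an identical pod
		// The real test waits for the new pod to be Running before the final check.
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // foo should still be there
	}
	for _, s := range steps {
		if err := kubectl(s...); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("file survived pod deletion: PVC-backed storage is persistent")
}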

                                                
                                    
TestFunctional/parallel/SSHCmd (0.48s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.37s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh -n functional-044661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 cp functional-044661:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1975332255/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh -n functional-044661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh -n functional-044661 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)

                                                
                                    
TestFunctional/parallel/MySQL (26.72s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-044661 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-kt8t8" [c38fdc46-99bb-466e-a85b-c3f2800cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-kt8t8" [c38fdc46-99bb-466e-a85b-c3f2800cbebc] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.004547886s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-044661 exec mysql-859648c796-kt8t8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-044661 exec mysql-859648c796-kt8t8 -- mysql -ppassword -e "show databases;": exit status 1 (146.440856ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-044661 exec mysql-859648c796-kt8t8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-044661 exec mysql-859648c796-kt8t8 -- mysql -ppassword -e "show databases;": exit status 1 (146.503503ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-044661 exec mysql-859648c796-kt8t8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.72s)
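
The two ERROR 2002 exits are expected: the pod reports Running as soon as the container starts, but mysqld needs several more seconds before it listens on its socket, so the test simply retries the query until it succeeds (the third attempt here). A sketch of that retry loop; the interval and attempt budget are arbitrary placeholders, not the values used by functional_test.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	query := []string{"exec", "mysql-859648c796-kt8t8", "--",
		"mysql", "-ppassword", "-e", "show databases;"}

	for attempt := 1; attempt <= 10; attempt++ { // placeholder retry budget
		out, err := exec.Command("kubectl", query...).CombinedOutput()
		if err == nil {
			fmt.Printf("mysqld ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		// ERROR 2002: the server process is up but not yet accepting socket connections.
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(3 * time.Second) // placeholder backoff
	}
	fmt.Println("mysqld never became ready")
}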

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1075208/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo cat /etc/test/nested/copy/1075208/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1075208.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo cat /etc/ssl/certs/1075208.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1075208.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo cat /usr/share/ca-certificates/1075208.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/10752082.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo cat /etc/ssl/certs/10752082.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/10752082.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo cat /usr/share/ca-certificates/10752082.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-044661 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044661 ssh "sudo systemctl is-active docker": exit status 1 (289.990438ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044661 ssh "sudo systemctl is-active containerd": exit status 1 (253.185396ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
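
Here the non-zero exits are the success condition: with crio as the active runtime, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit non-zero (status 3, which minikube ssh reports as a failed remote command, hence "Process exited with status 3"). A sketch that treats that combination as the expected result; the exit-code propagation through minikube ssh is an assumption based on the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inactive(unit string) bool {
	// systemctl is-active exits 0 only when the unit is active; "inactive" comes with a non-zero status.
	out, err := exec.Command("minikube", "-p", "functional-044661",
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	return err != nil && strings.Contains(string(out), "inactive")
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s disabled: %v\n", unit, inactive(unit))
	}
}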

                                                
                                    
TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-044661 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-044661 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-lt97j" [371df4ae-c61b-414f-9460-d8bafb56af01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-lt97j" [371df4ae-c61b-414f-9460-d8bafb56af01] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003819119s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 service list -o json
functional_test.go:1490: Took "585.711584ms" to run "out/minikube-linux-amd64 -p functional-044661 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.198:32497
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.7s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-044661 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-044661
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-044661
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-044661 image ls --format short --alsologtostderr:
I0318 13:04:57.423113 1084936 out.go:291] Setting OutFile to fd 1 ...
I0318 13:04:57.423364 1084936 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:57.423374 1084936 out.go:304] Setting ErrFile to fd 2...
I0318 13:04:57.423378 1084936 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:57.423587 1084936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
I0318 13:04:57.424234 1084936 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:57.424351 1084936 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:57.424842 1084936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:57.424905 1084936 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:57.440495 1084936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
I0318 13:04:57.441055 1084936 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:57.441719 1084936 main.go:141] libmachine: Using API Version  1
I0318 13:04:57.441753 1084936 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:57.442177 1084936 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:57.442376 1084936 main.go:141] libmachine: (functional-044661) Calling .GetState
I0318 13:04:57.444536 1084936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:57.444581 1084936 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:57.460521 1084936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
I0318 13:04:57.460980 1084936 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:57.461594 1084936 main.go:141] libmachine: Using API Version  1
I0318 13:04:57.461636 1084936 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:57.462057 1084936 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:57.462279 1084936 main.go:141] libmachine: (functional-044661) Calling .DriverName
I0318 13:04:57.462515 1084936 ssh_runner.go:195] Run: systemctl --version
I0318 13:04:57.462539 1084936 main.go:141] libmachine: (functional-044661) Calling .GetSSHHostname
I0318 13:04:57.465954 1084936 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:57.466396 1084936 main.go:141] libmachine: (functional-044661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:12:98", ip: ""} in network mk-functional-044661: {Iface:virbr1 ExpiryTime:2024-03-18 13:54:41 +0000 UTC Type:0 Mac:52:54:00:18:12:98 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-044661 Clientid:01:52:54:00:18:12:98}
I0318 13:04:57.466425 1084936 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined IP address 192.168.39.198 and MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:57.466602 1084936 main.go:141] libmachine: (functional-044661) Calling .GetSSHPort
I0318 13:04:57.466765 1084936 main.go:141] libmachine: (functional-044661) Calling .GetSSHKeyPath
I0318 13:04:57.466928 1084936 main.go:141] libmachine: (functional-044661) Calling .GetSSHUsername
I0318 13:04:57.467089 1084936 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/functional-044661/id_rsa Username:docker}
I0318 13:04:57.551650 1084936 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 13:04:57.606518 1084936 main.go:141] libmachine: Making call to close driver server
I0318 13:04:57.606538 1084936 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:04:57.606867 1084936 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:04:57.606885 1084936 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 13:04:57.606894 1084936 main.go:141] libmachine: Making call to close driver server
I0318 13:04:57.606901 1084936 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:04:57.607177 1084936 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:04:57.607196 1084936 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
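
As the stderr trace shows, `image ls` is a thin wrapper: it SSHes into the node, runs `sudo crictl images --output json`, and flattens the repo tags into the list printed above. A hedged sketch of that last step; the JSON field names (images, repoTags) are an assumption about crictl's output shape, not something taken from this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictl images --output json is assumed to return {"images":[{"repoTags":[...], ...}, ...]}.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Same command the trace shows minikube running over SSH inside the VM.
	out, err := exec.Command("minikube", "-p", "functional-044661",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("crictl via minikube ssh:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}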

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-044661 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| localhost/minikube-local-cache-test     | functional-044661  | 0ceb7722199a9 | 3.35kB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| gcr.io/google-containers/addon-resizer  | functional-044661  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-044661 image ls --format table --alsologtostderr:
I0318 13:04:57.926272 1085047 out.go:291] Setting OutFile to fd 1 ...
I0318 13:04:57.926385 1085047 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:57.926397 1085047 out.go:304] Setting ErrFile to fd 2...
I0318 13:04:57.926402 1085047 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:57.926641 1085047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
I0318 13:04:57.927303 1085047 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:57.927425 1085047 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:57.927813 1085047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:57.927879 1085047 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:57.943902 1085047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
I0318 13:04:57.944329 1085047 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:57.945041 1085047 main.go:141] libmachine: Using API Version  1
I0318 13:04:57.945079 1085047 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:57.945433 1085047 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:57.945608 1085047 main.go:141] libmachine: (functional-044661) Calling .GetState
I0318 13:04:57.947332 1085047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:57.947374 1085047 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:57.962906 1085047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
I0318 13:04:57.963362 1085047 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:57.963845 1085047 main.go:141] libmachine: Using API Version  1
I0318 13:04:57.963870 1085047 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:57.964189 1085047 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:57.964362 1085047 main.go:141] libmachine: (functional-044661) Calling .DriverName
I0318 13:04:57.964640 1085047 ssh_runner.go:195] Run: systemctl --version
I0318 13:04:57.964673 1085047 main.go:141] libmachine: (functional-044661) Calling .GetSSHHostname
I0318 13:04:57.967242 1085047 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:57.967611 1085047 main.go:141] libmachine: (functional-044661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:12:98", ip: ""} in network mk-functional-044661: {Iface:virbr1 ExpiryTime:2024-03-18 13:54:41 +0000 UTC Type:0 Mac:52:54:00:18:12:98 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-044661 Clientid:01:52:54:00:18:12:98}
I0318 13:04:57.967644 1085047 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined IP address 192.168.39.198 and MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:57.967751 1085047 main.go:141] libmachine: (functional-044661) Calling .GetSSHPort
I0318 13:04:57.967958 1085047 main.go:141] libmachine: (functional-044661) Calling .GetSSHKeyPath
I0318 13:04:57.968137 1085047 main.go:141] libmachine: (functional-044661) Calling .GetSSHUsername
I0318 13:04:57.968368 1085047 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/functional-044661/id_rsa Username:docker}
I0318 13:04:58.054283 1085047 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 13:04:58.112647 1085047 main.go:141] libmachine: Making call to close driver server
I0318 13:04:58.112674 1085047 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:04:58.113045 1085047 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:04:58.113075 1085047 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 13:04:58.113085 1085047 main.go:141] libmachine: Making call to close driver server
I0318 13:04:58.113093 1085047 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:04:58.113044 1085047 main.go:141] libmachine: (functional-044661) DBG | Closing plugin on server side
I0318 13:04:58.113348 1085047 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:04:58.113365 1085047 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
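Note: as the Stderr above shows, every "image ls" format (table, json, yaml) is rendered from the same raw data, gathered by running "sudo crictl images --output json" on the node over SSH. A minimal way to inspect that raw data directly, assuming the profile from this run is still up, is:

    out/minikube-linux-amd64 -p functional-044661 ssh -- sudo crictl images --output json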

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-044661 image ls --format json --alsologtostderr:
[{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f17
48bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size
":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6c
c407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-044661"],"size":"34114467"},{"id":"0ceb7722199a9f35ba1d54aac
302f850653fc0889bfb471f1c7105550fe4e484","repoDigests":["localhost/minikube-local-cache-test@sha256:79d3b26a6f3ad2503235495ebbbfa0f3d75ec53cc32d5eac110e09fb9fbe2bba"],"repoTags":["localhost/minikube-local-cache-test:functional-044661"],"size":"3345"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoT
ags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k
8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-044661 image ls --format json --alsologtostderr:
I0318 13:04:57.692562 1084992 out.go:291] Setting OutFile to fd 1 ...
I0318 13:04:57.692831 1084992 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:57.692845 1084992 out.go:304] Setting ErrFile to fd 2...
I0318 13:04:57.692850 1084992 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:57.693043 1084992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
I0318 13:04:57.693644 1084992 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:57.693773 1084992 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:57.694154 1084992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:57.694193 1084992 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:57.711339 1084992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32837
I0318 13:04:57.711848 1084992 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:57.712423 1084992 main.go:141] libmachine: Using API Version  1
I0318 13:04:57.712449 1084992 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:57.712841 1084992 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:57.713100 1084992 main.go:141] libmachine: (functional-044661) Calling .GetState
I0318 13:04:57.715283 1084992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:57.715338 1084992 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:57.730134 1084992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
I0318 13:04:57.730602 1084992 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:57.731132 1084992 main.go:141] libmachine: Using API Version  1
I0318 13:04:57.731161 1084992 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:57.731499 1084992 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:57.731725 1084992 main.go:141] libmachine: (functional-044661) Calling .DriverName
I0318 13:04:57.732003 1084992 ssh_runner.go:195] Run: systemctl --version
I0318 13:04:57.732031 1084992 main.go:141] libmachine: (functional-044661) Calling .GetSSHHostname
I0318 13:04:57.734885 1084992 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:57.735405 1084992 main.go:141] libmachine: (functional-044661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:12:98", ip: ""} in network mk-functional-044661: {Iface:virbr1 ExpiryTime:2024-03-18 13:54:41 +0000 UTC Type:0 Mac:52:54:00:18:12:98 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-044661 Clientid:01:52:54:00:18:12:98}
I0318 13:04:57.735447 1084992 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined IP address 192.168.39.198 and MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:57.735592 1084992 main.go:141] libmachine: (functional-044661) Calling .GetSSHPort
I0318 13:04:57.735857 1084992 main.go:141] libmachine: (functional-044661) Calling .GetSSHKeyPath
I0318 13:04:57.736020 1084992 main.go:141] libmachine: (functional-044661) Calling .GetSSHUsername
I0318 13:04:57.736160 1084992 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/functional-044661/id_rsa Username:docker}
I0318 13:04:57.816175 1084992 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 13:04:57.859090 1084992 main.go:141] libmachine: Making call to close driver server
I0318 13:04:57.859155 1084992 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:04:57.859457 1084992 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:04:57.859472 1084992 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 13:04:57.859482 1084992 main.go:141] libmachine: Making call to close driver server
I0318 13:04:57.859493 1084992 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:04:57.859535 1084992 main.go:141] libmachine: (functional-044661) DBG | Closing plugin on server side
I0318 13:04:57.859752 1084992 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:04:57.859775 1084992 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 13:04:57.859787 1084992 main.go:141] libmachine: (functional-044661) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
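Note: of the three listing formats, the JSON output above is the easiest one to post-process. A hedged sketch for pulling out one tag and size per image (assumes jq is installed on the host; it is not used by the test itself):

    out/minikube-linux-amd64 -p functional-044661 image ls --format json \
      | jq -r '.[] | (.repoTags[0] // "none") + " " + .size'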

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-044661 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 0ceb7722199a9f35ba1d54aac302f850653fc0889bfb471f1c7105550fe4e484
repoDigests:
- localhost/minikube-local-cache-test@sha256:79d3b26a6f3ad2503235495ebbbfa0f3d75ec53cc32d5eac110e09fb9fbe2bba
repoTags:
- localhost/minikube-local-cache-test:functional-044661
size: "3345"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-044661
size: "34114467"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-044661 image ls --format yaml --alsologtostderr:
I0318 13:04:57.424339 1084937 out.go:291] Setting OutFile to fd 1 ...
I0318 13:04:57.424481 1084937 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:57.424496 1084937 out.go:304] Setting ErrFile to fd 2...
I0318 13:04:57.424503 1084937 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:57.424732 1084937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
I0318 13:04:57.425276 1084937 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:57.425380 1084937 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:57.425737 1084937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:57.425777 1084937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:57.441024 1084937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
I0318 13:04:57.441483 1084937 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:57.442142 1084937 main.go:141] libmachine: Using API Version  1
I0318 13:04:57.442170 1084937 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:57.442573 1084937 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:57.442831 1084937 main.go:141] libmachine: (functional-044661) Calling .GetState
I0318 13:04:57.444977 1084937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:57.445039 1084937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:57.460679 1084937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
I0318 13:04:57.461211 1084937 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:57.461802 1084937 main.go:141] libmachine: Using API Version  1
I0318 13:04:57.461841 1084937 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:57.462227 1084937 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:57.462466 1084937 main.go:141] libmachine: (functional-044661) Calling .DriverName
I0318 13:04:57.462710 1084937 ssh_runner.go:195] Run: systemctl --version
I0318 13:04:57.462741 1084937 main.go:141] libmachine: (functional-044661) Calling .GetSSHHostname
I0318 13:04:57.465870 1084937 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:57.466318 1084937 main.go:141] libmachine: (functional-044661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:12:98", ip: ""} in network mk-functional-044661: {Iface:virbr1 ExpiryTime:2024-03-18 13:54:41 +0000 UTC Type:0 Mac:52:54:00:18:12:98 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-044661 Clientid:01:52:54:00:18:12:98}
I0318 13:04:57.466360 1084937 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined IP address 192.168.39.198 and MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:57.466536 1084937 main.go:141] libmachine: (functional-044661) Calling .GetSSHPort
I0318 13:04:57.466744 1084937 main.go:141] libmachine: (functional-044661) Calling .GetSSHKeyPath
I0318 13:04:57.466964 1084937 main.go:141] libmachine: (functional-044661) Calling .GetSSHUsername
I0318 13:04:57.467406 1084937 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/functional-044661/id_rsa Username:docker}
I0318 13:04:57.558680 1084937 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 13:04:57.621581 1084937 main.go:141] libmachine: Making call to close driver server
I0318 13:04:57.621597 1084937 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:04:57.621928 1084937 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:04:57.621967 1084937 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 13:04:57.621977 1084937 main.go:141] libmachine: Making call to close driver server
I0318 13:04:57.621985 1084937 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:04:57.622281 1084937 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:04:57.622296 1084937 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044661 ssh pgrep buildkitd: exit status 1 (219.770877ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image build -t localhost/my-image:functional-044661 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 image build -t localhost/my-image:functional-044661 testdata/build --alsologtostderr: (2.184722141s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-044661 image build -t localhost/my-image:functional-044661 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3854119448a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-044661
--> 4a82f847f89
Successfully tagged localhost/my-image:functional-044661
4a82f847f891d3560fa86febed92a97fda409dd389eda88f90435fbabffe5ed1
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-044661 image build -t localhost/my-image:functional-044661 testdata/build --alsologtostderr:
I0318 13:04:57.895286 1085036 out.go:291] Setting OutFile to fd 1 ...
I0318 13:04:57.895624 1085036 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:57.895680 1085036 out.go:304] Setting ErrFile to fd 2...
I0318 13:04:57.895693 1085036 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 13:04:57.895984 1085036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
I0318 13:04:57.896770 1085036 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:57.897636 1085036 config.go:182] Loaded profile config "functional-044661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 13:04:57.898180 1085036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:57.898230 1085036 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:57.915270 1085036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
I0318 13:04:57.915915 1085036 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:57.916569 1085036 main.go:141] libmachine: Using API Version  1
I0318 13:04:57.916604 1085036 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:57.917014 1085036 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:57.917203 1085036 main.go:141] libmachine: (functional-044661) Calling .GetState
I0318 13:04:57.919249 1085036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 13:04:57.919298 1085036 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 13:04:57.935090 1085036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
I0318 13:04:57.935551 1085036 main.go:141] libmachine: () Calling .GetVersion
I0318 13:04:57.936112 1085036 main.go:141] libmachine: Using API Version  1
I0318 13:04:57.936138 1085036 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 13:04:57.936561 1085036 main.go:141] libmachine: () Calling .GetMachineName
I0318 13:04:57.936777 1085036 main.go:141] libmachine: (functional-044661) Calling .DriverName
I0318 13:04:57.937027 1085036 ssh_runner.go:195] Run: systemctl --version
I0318 13:04:57.937060 1085036 main.go:141] libmachine: (functional-044661) Calling .GetSSHHostname
I0318 13:04:57.940173 1085036 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:57.940683 1085036 main.go:141] libmachine: (functional-044661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:12:98", ip: ""} in network mk-functional-044661: {Iface:virbr1 ExpiryTime:2024-03-18 13:54:41 +0000 UTC Type:0 Mac:52:54:00:18:12:98 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-044661 Clientid:01:52:54:00:18:12:98}
I0318 13:04:57.940718 1085036 main.go:141] libmachine: (functional-044661) DBG | domain functional-044661 has defined IP address 192.168.39.198 and MAC address 52:54:00:18:12:98 in network mk-functional-044661
I0318 13:04:57.940888 1085036 main.go:141] libmachine: (functional-044661) Calling .GetSSHPort
I0318 13:04:57.941109 1085036 main.go:141] libmachine: (functional-044661) Calling .GetSSHKeyPath
I0318 13:04:57.941273 1085036 main.go:141] libmachine: (functional-044661) Calling .GetSSHUsername
I0318 13:04:57.941444 1085036 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/functional-044661/id_rsa Username:docker}
I0318 13:04:58.023308 1085036 build_images.go:161] Building image from path: /tmp/build.1623410327.tar
I0318 13:04:58.023395 1085036 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0318 13:04:58.037622 1085036 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1623410327.tar
I0318 13:04:58.042873 1085036 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1623410327.tar: stat -c "%s %y" /var/lib/minikube/build/build.1623410327.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1623410327.tar': No such file or directory
I0318 13:04:58.042908 1085036 ssh_runner.go:362] scp /tmp/build.1623410327.tar --> /var/lib/minikube/build/build.1623410327.tar (3072 bytes)
I0318 13:04:58.079371 1085036 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1623410327
I0318 13:04:58.102180 1085036 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1623410327 -xf /var/lib/minikube/build/build.1623410327.tar
I0318 13:04:58.121353 1085036 crio.go:297] Building image: /var/lib/minikube/build/build.1623410327
I0318 13:04:58.121426 1085036 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-044661 /var/lib/minikube/build/build.1623410327 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0318 13:04:59.987230 1085036 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-044661 /var/lib/minikube/build/build.1623410327 --cgroup-manager=cgroupfs: (1.865769735s)
I0318 13:04:59.987325 1085036 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1623410327
I0318 13:04:59.999891 1085036 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1623410327.tar
I0318 13:05:00.012465 1085036 build_images.go:217] Built localhost/my-image:functional-044661 from /tmp/build.1623410327.tar
I0318 13:05:00.012518 1085036 build_images.go:133] succeeded building to: functional-044661
I0318 13:05:00.012558 1085036 build_images.go:134] failed building to: 
I0318 13:05:00.012688 1085036 main.go:141] libmachine: Making call to close driver server
I0318 13:05:00.012715 1085036 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:05:00.013031 1085036 main.go:141] libmachine: (functional-044661) DBG | Closing plugin on server side
I0318 13:05:00.013056 1085036 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:05:00.013072 1085036 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 13:05:00.013088 1085036 main.go:141] libmachine: Making call to close driver server
I0318 13:05:00.013098 1085036 main.go:141] libmachine: (functional-044661) Calling .Close
I0318 13:05:00.013361 1085036 main.go:141] libmachine: Successfully made call to close driver server
I0318 13:05:00.013381 1085036 main.go:141] libmachine: (functional-044661) DBG | Closing plugin on server side
I0318 13:05:00.013396 1085036 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.64s)
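Note: the STEP lines in the build output imply what the testdata/build context contains: a Containerfile with FROM gcr.io/k8s-minikube/busybox, RUN true, and ADD content.txt /. A sketch of reproducing an equivalent build by hand (reconstructed from the log, not copied from the repo; the content of content.txt is illustrative):

    mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Containerfile
    echo demo > content.txt
    out/minikube-linux-amd64 -p functional-044661 image build -t localhost/my-image:functional-044661 .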

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.521211748s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-044661
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.198:32497
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image load --daemon gcr.io/google-containers/addon-resizer:functional-044661 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 image load --daemon gcr.io/google-containers/addon-resizer:functional-044661 --alsologtostderr: (6.532527698s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.86s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "313.451867ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "72.692178ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "314.665721ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "67.182637ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (16.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdany-port1969921394/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710767074831701661" to /tmp/TestFunctionalparallelMountCmdany-port1969921394/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710767074831701661" to /tmp/TestFunctionalparallelMountCmdany-port1969921394/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710767074831701661" to /tmp/TestFunctionalparallelMountCmdany-port1969921394/001/test-1710767074831701661
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.764688ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 18 13:04 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 18 13:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 18 13:04 test-1710767074831701661
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh cat /mount-9p/test-1710767074831701661
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-044661 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d066ed88-aef1-43ad-a01f-c570dd6c903d] Pending
helpers_test.go:344: "busybox-mount" [d066ed88-aef1-43ad-a01f-c570dd6c903d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d066ed88-aef1-43ad-a01f-c570dd6c903d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d066ed88-aef1-43ad-a01f-c570dd6c903d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.004461815s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-044661 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdany-port1969921394/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (16.27s)
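Note: the any-port flow above can be reproduced by hand with the same commands the test drives (a sketch; the host directory is illustrative):

    out/minikube-linux-amd64 mount -p functional-044661 /tmp/mnt-demo:/mount-9p &
    out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-044661 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-044661 ssh "sudo umount -f /mount-9p"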

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image load --daemon gcr.io/google-containers/addon-resizer:functional-044661 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 image load --daemon gcr.io/google-containers/addon-resizer:functional-044661 --alsologtostderr: (5.593087988s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.86s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.252354029s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-044661
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image load --daemon gcr.io/google-containers/addon-resizer:functional-044661 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 image load --daemon gcr.io/google-containers/addon-resizer:functional-044661 --alsologtostderr: (4.218116331s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.72s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdspecific-port1318059214/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.509526ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdspecific-port1318059214/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044661 ssh "sudo umount -f /mount-9p": exit status 1 (226.425228ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-044661 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdspecific-port1318059214/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image save gcr.io/google-containers/addon-resizer:functional-044661 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 image save gcr.io/google-containers/addon-resizer:functional-044661 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.145000374s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image rm gcr.io/google-containers/addon-resizer:functional-044661 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup537190675/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup537190675/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup537190675/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T" /mount1: exit status 1 (329.760935ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-044661 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup537190675/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup537190675/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup537190675/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)
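Note: VerifyCleanup relies on the --kill flag to tear down every mount helper for the profile at once; the same command, taken verbatim from the test above, is useful for cleaning up stale mounts by hand:

    out/minikube-linux-amd64 mount -p functional-044661 --kill=true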

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.319068956s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.60s)
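Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a tar round trip. Condensed into a shell sketch (the tar path is illustrative; the run above uses the Jenkins workspace path):

    out/minikube-linux-amd64 -p functional-044661 image save gcr.io/google-containers/addon-resizer:functional-044661 /tmp/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-044661 image rm gcr.io/google-containers/addon-resizer:functional-044661
    out/minikube-linux-amd64 -p functional-044661 image load /tmp/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-044661 image ls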

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-044661
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-044661 image save --daemon gcr.io/google-containers/addon-resizer:functional-044661 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-044661 image save --daemon gcr.io/google-containers/addon-resizer:functional-044661 --alsologtostderr: (1.109813453s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-044661
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.14s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-044661
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-044661
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-044661
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (214.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-942957 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0318 13:12:37.318790 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-942957 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m34.202548582s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (214.92s)
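
StartCluster brings up the multi-control-plane profile used by the rest of this group via the --ha flag and then checks node state. A sketch of the start invocation, assuming the plain `minikube` binary; the memory and verbosity flags simply mirror what the test passes:

    minikube start -p ha-942957 --ha --wait=true --memory=2200 -v=7 --alsologtostderr \
      --driver=kvm2 --container-runtime=crio
    minikube -p ha-942957 status -v=7 --alsologtostderr   # control-plane nodes should report Running/Configured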

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-942957 -- rollout status deployment/busybox: (3.282365293s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-9qmdx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-b64gc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-h4q2t -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-9qmdx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-b64gc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-h4q2t -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-9qmdx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-b64gc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-h4q2t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.96s)
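
DeployApp applies the busybox DNS test manifest, waits for the rollout, and then resolves both external and in-cluster names from each replica. A sketch against a single pod, assuming plain kubectl with the profile's context (the pod name is specific to this run):

    kubectl --context ha-942957 apply -f testdata/ha/ha-pod-dns-test.yaml
    kubectl --context ha-942957 rollout status deployment/busybox
    kubectl --context ha-942957 exec busybox-5b5d89c9d6-9qmdx -- nslookup kubernetes.io
    kubectl --context ha-942957 exec busybox-5b5d89c9d6-9qmdx -- nslookup kubernetes.default.svc.cluster.local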

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-9qmdx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-9qmdx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-b64gc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-b64gc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-h4q2t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-942957 -- exec busybox-5b5d89c9d6-h4q2t -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.48s)
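
PingHostFromPods extracts the address that host.minikube.internal resolves to inside each pod and pings it once. The pipeline it uses, shown against a single pod (the pod name and gateway address are from this run):

    kubectl --context ha-942957 exec busybox-5b5d89c9d6-9qmdx -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-942957 exec busybox-5b5d89c9d6-9qmdx -- sh -c "ping -c 1 192.168.39.1"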

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (50.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-942957 -v=7 --alsologtostderr
E0318 13:14:00.369462 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 13:14:17.919276 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:14:17.924612 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:14:17.934942 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:14:17.955298 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:14:17.995699 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:14:18.076084 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:14:18.236345 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:14:18.556701 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:14:19.197094 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:14:20.477272 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:14:23.038072 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-942957 -v=7 --alsologtostderr: (49.23632213s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.14s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-942957 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (14.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp testdata/cp-test.txt ha-942957:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile666867504/001/cp-test_ha-942957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957:/home/docker/cp-test.txt ha-942957-m02:/home/docker/cp-test_ha-942957_ha-942957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m02 "sudo cat /home/docker/cp-test_ha-942957_ha-942957-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957:/home/docker/cp-test.txt ha-942957-m03:/home/docker/cp-test_ha-942957_ha-942957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m03 "sudo cat /home/docker/cp-test_ha-942957_ha-942957-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957:/home/docker/cp-test.txt ha-942957-m04:/home/docker/cp-test_ha-942957_ha-942957-m04.txt
E0318 13:14:28.158931 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m04 "sudo cat /home/docker/cp-test_ha-942957_ha-942957-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp testdata/cp-test.txt ha-942957-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile666867504/001/cp-test_ha-942957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m02:/home/docker/cp-test.txt ha-942957:/home/docker/cp-test_ha-942957-m02_ha-942957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957 "sudo cat /home/docker/cp-test_ha-942957-m02_ha-942957.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m02:/home/docker/cp-test.txt ha-942957-m03:/home/docker/cp-test_ha-942957-m02_ha-942957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m03 "sudo cat /home/docker/cp-test_ha-942957-m02_ha-942957-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m02:/home/docker/cp-test.txt ha-942957-m04:/home/docker/cp-test_ha-942957-m02_ha-942957-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m04 "sudo cat /home/docker/cp-test_ha-942957-m02_ha-942957-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp testdata/cp-test.txt ha-942957-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile666867504/001/cp-test_ha-942957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt ha-942957:/home/docker/cp-test_ha-942957-m03_ha-942957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957 "sudo cat /home/docker/cp-test_ha-942957-m03_ha-942957.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt ha-942957-m02:/home/docker/cp-test_ha-942957-m03_ha-942957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m02 "sudo cat /home/docker/cp-test_ha-942957-m03_ha-942957-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m03:/home/docker/cp-test.txt ha-942957-m04:/home/docker/cp-test_ha-942957-m03_ha-942957-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m04 "sudo cat /home/docker/cp-test_ha-942957-m03_ha-942957-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp testdata/cp-test.txt ha-942957-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile666867504/001/cp-test_ha-942957-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt ha-942957:/home/docker/cp-test_ha-942957-m04_ha-942957.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957 "sudo cat /home/docker/cp-test_ha-942957-m04_ha-942957.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt ha-942957-m02:/home/docker/cp-test_ha-942957-m04_ha-942957-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m02 "sudo cat /home/docker/cp-test_ha-942957-m04_ha-942957-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 cp ha-942957-m04:/home/docker/cp-test.txt ha-942957-m03:/home/docker/cp-test_ha-942957-m04_ha-942957-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m04 "sudo cat /home/docker/cp-test.txt"
E0318 13:14:38.399562 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 ssh -n ha-942957-m03 "sudo cat /home/docker/cp-test_ha-942957-m04_ha-942957-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.04s)
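
CopyFile pushes a file from the host into each node with `minikube cp`, copies it node-to-node, and verifies every hop by cat-ing the file over `ssh -n`. A shortened sketch of one host-to-node and one node-to-node hop, assuming the plain `minikube` binary (destination paths are illustrative):

    minikube -p ha-942957 cp testdata/cp-test.txt ha-942957:/home/docker/cp-test.txt
    minikube -p ha-942957 ssh -n ha-942957 "sudo cat /home/docker/cp-test.txt"
    minikube -p ha-942957 cp ha-942957:/home/docker/cp-test.txt ha-942957-m02:/home/docker/cp-test.txt
    minikube -p ha-942957 ssh -n ha-942957-m02 "sudo cat /home/docker/cp-test.txt"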

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0318 13:17:01.761928 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.504136558s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-942957 node delete m03 -v=7 --alsologtostderr: (16.874190669s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.70s)
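
DeleteSecondaryNode removes one control-plane node and confirms that both minikube and the API server agree on the remaining membership. The same three checks, assuming the plain `minikube` binary:

    minikube -p ha-942957 node delete m03 -v=7 --alsologtostderr
    minikube -p ha-942957 status -v=7 --alsologtostderr
    kubectl get nodes   # the m03 node should no longer be listed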

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (354.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-942957 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0318 13:27:37.321128 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 13:29:17.918757 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:30:40.369875 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 13:30:40.963236 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
E0318 13:32:37.318831 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-942957 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m53.874659092s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (354.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (75.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-942957 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-942957 --control-plane -v=7 --alsologtostderr: (1m14.695800044s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-942957 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.58s)

                                                
                                    
x
+
TestJSONOutput/start/Command (98.23s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-701993 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0318 13:34:17.919276 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-701993 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.225392539s)
--- PASS: TestJSONOutput/start/Command (98.23s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-701993 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-701993 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.46s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-701993 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-701993 --output=json --user=testUser: (7.456839624s)
--- PASS: TestJSONOutput/stop/Command (7.46s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-076265 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-076265 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.09637ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a3925677-da72-4608-8bbd-1b527a5e5a2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-076265] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e85c059-d5e9-4fbb-be9b-65c889e89f2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18427"}}
	{"specversion":"1.0","id":"fa2f2454-27ef-41b6-a2ea-c60511b56785","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"07e68995-766b-46e1-b642-69aec40cd32b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig"}}
	{"specversion":"1.0","id":"cffd8619-1774-41d2-8551-2043db076166","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube"}}
	{"specversion":"1.0","id":"d68bb627-f805-4e28-ba67-b4f5e7eabd0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0bb8c0aa-f25e-4188-a6e5-fbc30ada2af7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7ded4273-6add-4ff2-a6b9-262e2bd7b676","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-076265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-076265
--- PASS: TestErrorJSONOutput (0.23s)
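
With --output=json, minikube emits one CloudEvents-style JSON object per line; the `type` field distinguishes setup steps, informational messages, and errors such as the DRV_UNSUPPORTED_OS failure above. A sketch of filtering that stream, assuming `jq` is available (jq is not something these tests use) and an illustrative profile name:

    # print only the human-readable step messages
    minikube start -p demo --output=json --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'
    # or surface errors with their names:
    #   select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"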

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (94.51s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-152609 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-152609 --driver=kvm2  --container-runtime=crio: (46.43129617s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-155717 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-155717 --driver=kvm2  --container-runtime=crio: (45.120304884s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-152609
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-155717
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-155717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-155717
E0318 13:37:37.319162 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "first-152609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-152609
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-152609: (1.02273843s)
--- PASS: TestMinikubeProfile (94.51s)
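
TestMinikubeProfile starts two independent profiles, switches the active profile between them, and deletes both. A sketch of the profile switching itself, assuming the plain `minikube` binary:

    minikube start -p first-152609 --driver=kvm2 --container-runtime=crio
    minikube start -p second-155717 --driver=kvm2 --container-runtime=crio
    minikube profile first-152609     # make "first" the active profile
    minikube profile list -ojson      # inspect both profiles as JSON
    minikube delete -p second-155717
    minikube delete -p first-152609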

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-627238 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-627238 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.891210982s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.89s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-627238 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-627238 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)
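
The MountStart pair boots a node with only the 9p mount configured (--no-kubernetes skips the control plane entirely) and then verifies the mount from inside the guest. A sketch of the first start plus its verification, assuming the plain `minikube` binary; the port, uid/gid, and msize values simply mirror the test's flags:

    minikube start -p mount-start-1-627238 --memory=2048 --no-kubernetes \
      --mount --mount-port 46464 --mount-uid 0 --mount-gid 0 --mount-msize 6543 \
      --driver=kvm2 --container-runtime=crio
    minikube -p mount-start-1-627238 ssh -- ls /minikube-host     # host directory visible in the guest
    minikube -p mount-start-1-627238 ssh -- mount | grep 9p       # backed by a 9p filesystem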

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.76s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-646760 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-646760 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.759699012s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.76s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-646760 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-646760 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-627238 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-646760 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-646760 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-646760
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-646760: (1.303748806s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.5s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-646760
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-646760: (21.501218787s)
--- PASS: TestMountStart/serial/RestartStopped (22.50s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-646760 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-646760 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (104.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-994669 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0318 13:39:17.918450 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-994669 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m43.890558436s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.34s)
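
FreshStart2Nodes creates a two-node cluster (one control plane plus one worker, per the status output later in this group), and the AddNode step below grows it to three. A sketch of both operations, assuming the plain `minikube` binary:

    minikube start -p multinode-994669 --nodes=2 --memory=2200 --wait=true \
      -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    minikube -p multinode-994669 status --alsologtostderr
    minikube node add -p multinode-994669 -v 3 --alsologtostderr   # adds multinode-994669-m03 as a worker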

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-994669 -- rollout status deployment/busybox: (2.820046036s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- exec busybox-5b5d89c9d6-4nbjw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- exec busybox-5b5d89c9d6-8cd7k -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- exec busybox-5b5d89c9d6-4nbjw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- exec busybox-5b5d89c9d6-8cd7k -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- exec busybox-5b5d89c9d6-4nbjw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- exec busybox-5b5d89c9d6-8cd7k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.68s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- exec busybox-5b5d89c9d6-4nbjw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- exec busybox-5b5d89c9d6-4nbjw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- exec busybox-5b5d89c9d6-8cd7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-994669 -- exec busybox-5b5d89c9d6-8cd7k -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (44.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-994669 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-994669 -v 3 --alsologtostderr: (44.036964195s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.65s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-994669 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp testdata/cp-test.txt multinode-994669:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp multinode-994669:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile486103846/001/cp-test_multinode-994669.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp multinode-994669:/home/docker/cp-test.txt multinode-994669-m02:/home/docker/cp-test_multinode-994669_multinode-994669-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m02 "sudo cat /home/docker/cp-test_multinode-994669_multinode-994669-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp multinode-994669:/home/docker/cp-test.txt multinode-994669-m03:/home/docker/cp-test_multinode-994669_multinode-994669-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m03 "sudo cat /home/docker/cp-test_multinode-994669_multinode-994669-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp testdata/cp-test.txt multinode-994669-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp multinode-994669-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile486103846/001/cp-test_multinode-994669-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp multinode-994669-m02:/home/docker/cp-test.txt multinode-994669:/home/docker/cp-test_multinode-994669-m02_multinode-994669.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669 "sudo cat /home/docker/cp-test_multinode-994669-m02_multinode-994669.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp multinode-994669-m02:/home/docker/cp-test.txt multinode-994669-m03:/home/docker/cp-test_multinode-994669-m02_multinode-994669-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m03 "sudo cat /home/docker/cp-test_multinode-994669-m02_multinode-994669-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp testdata/cp-test.txt multinode-994669-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp multinode-994669-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile486103846/001/cp-test_multinode-994669-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp multinode-994669-m03:/home/docker/cp-test.txt multinode-994669:/home/docker/cp-test_multinode-994669-m03_multinode-994669.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669 "sudo cat /home/docker/cp-test_multinode-994669-m03_multinode-994669.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 cp multinode-994669-m03:/home/docker/cp-test.txt multinode-994669-m02:/home/docker/cp-test_multinode-994669-m03_multinode-994669-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 ssh -n multinode-994669-m02 "sudo cat /home/docker/cp-test_multinode-994669-m03_multinode-994669-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.90s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-994669 node stop m03: (1.590624684s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-994669 status: exit status 7 (456.029365ms)

                                                
                                                
-- stdout --
	multinode-994669
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994669-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-994669-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-994669 status --alsologtostderr: exit status 7 (448.950819ms)

                                                
                                                
-- stdout --
	multinode-994669
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994669-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-994669-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:41:46.082986 1101555 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:46.083509 1101555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:46.083529 1101555 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:46.083535 1101555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:46.084039 1101555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 13:41:46.084369 1101555 out.go:298] Setting JSON to false
	I0318 13:41:46.084501 1101555 notify.go:220] Checking for updates...
	I0318 13:41:46.084561 1101555 mustload.go:65] Loading cluster: multinode-994669
	I0318 13:41:46.085243 1101555 config.go:182] Loaded profile config "multinode-994669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:41:46.085275 1101555 status.go:255] checking status of multinode-994669 ...
	I0318 13:41:46.085711 1101555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:41:46.085755 1101555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:41:46.104949 1101555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I0318 13:41:46.105424 1101555 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:41:46.106026 1101555 main.go:141] libmachine: Using API Version  1
	I0318 13:41:46.106063 1101555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:41:46.106375 1101555 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:41:46.106563 1101555 main.go:141] libmachine: (multinode-994669) Calling .GetState
	I0318 13:41:46.108213 1101555 status.go:330] multinode-994669 host status = "Running" (err=<nil>)
	I0318 13:41:46.108232 1101555 host.go:66] Checking if "multinode-994669" exists ...
	I0318 13:41:46.108529 1101555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:41:46.108569 1101555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:41:46.124807 1101555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35393
	I0318 13:41:46.125306 1101555 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:41:46.125783 1101555 main.go:141] libmachine: Using API Version  1
	I0318 13:41:46.125808 1101555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:41:46.126132 1101555 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:41:46.126310 1101555 main.go:141] libmachine: (multinode-994669) Calling .GetIP
	I0318 13:41:46.129363 1101555 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:41:46.129844 1101555 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:41:46.129886 1101555 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:41:46.129973 1101555 host.go:66] Checking if "multinode-994669" exists ...
	I0318 13:41:46.130277 1101555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:41:46.130324 1101555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:41:46.146891 1101555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42929
	I0318 13:41:46.147313 1101555 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:41:46.147776 1101555 main.go:141] libmachine: Using API Version  1
	I0318 13:41:46.147795 1101555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:41:46.148138 1101555 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:41:46.148351 1101555 main.go:141] libmachine: (multinode-994669) Calling .DriverName
	I0318 13:41:46.148588 1101555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:41:46.148630 1101555 main.go:141] libmachine: (multinode-994669) Calling .GetSSHHostname
	I0318 13:41:46.151224 1101555 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:41:46.151665 1101555 main.go:141] libmachine: (multinode-994669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1e:08", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:16 +0000 UTC Type:0 Mac:52:54:00:23:1e:08 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-994669 Clientid:01:52:54:00:23:1e:08}
	I0318 13:41:46.151705 1101555 main.go:141] libmachine: (multinode-994669) DBG | domain multinode-994669 has defined IP address 192.168.39.57 and MAC address 52:54:00:23:1e:08 in network mk-multinode-994669
	I0318 13:41:46.151815 1101555 main.go:141] libmachine: (multinode-994669) Calling .GetSSHPort
	I0318 13:41:46.152003 1101555 main.go:141] libmachine: (multinode-994669) Calling .GetSSHKeyPath
	I0318 13:41:46.152155 1101555 main.go:141] libmachine: (multinode-994669) Calling .GetSSHUsername
	I0318 13:41:46.152273 1101555 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/multinode-994669/id_rsa Username:docker}
	I0318 13:41:46.236232 1101555 ssh_runner.go:195] Run: systemctl --version
	I0318 13:41:46.243059 1101555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:41:46.259118 1101555 kubeconfig.go:125] found "multinode-994669" server: "https://192.168.39.57:8443"
	I0318 13:41:46.259160 1101555 api_server.go:166] Checking apiserver status ...
	I0318 13:41:46.259208 1101555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:41:46.272706 1101555 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1103/cgroup
	W0318 13:41:46.282516 1101555 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1103/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:41:46.282590 1101555 ssh_runner.go:195] Run: ls
	I0318 13:41:46.287132 1101555 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0318 13:41:46.291686 1101555 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0318 13:41:46.291709 1101555 status.go:422] multinode-994669 apiserver status = Running (err=<nil>)
	I0318 13:41:46.291720 1101555 status.go:257] multinode-994669 status: &{Name:multinode-994669 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:41:46.291740 1101555 status.go:255] checking status of multinode-994669-m02 ...
	I0318 13:41:46.292061 1101555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:41:46.292097 1101555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:41:46.308661 1101555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44315
	I0318 13:41:46.309199 1101555 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:41:46.309698 1101555 main.go:141] libmachine: Using API Version  1
	I0318 13:41:46.309721 1101555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:41:46.310034 1101555 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:41:46.310239 1101555 main.go:141] libmachine: (multinode-994669-m02) Calling .GetState
	I0318 13:41:46.311973 1101555 status.go:330] multinode-994669-m02 host status = "Running" (err=<nil>)
	I0318 13:41:46.311993 1101555 host.go:66] Checking if "multinode-994669-m02" exists ...
	I0318 13:41:46.312334 1101555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:41:46.312384 1101555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:41:46.328376 1101555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I0318 13:41:46.328831 1101555 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:41:46.329384 1101555 main.go:141] libmachine: Using API Version  1
	I0318 13:41:46.329409 1101555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:41:46.329744 1101555 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:41:46.329941 1101555 main.go:141] libmachine: (multinode-994669-m02) Calling .GetIP
	I0318 13:41:46.332920 1101555 main.go:141] libmachine: (multinode-994669-m02) DBG | domain multinode-994669-m02 has defined MAC address 52:54:00:1a:de:17 in network mk-multinode-994669
	I0318 13:41:46.333330 1101555 main.go:141] libmachine: (multinode-994669-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:de:17", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:40:20 +0000 UTC Type:0 Mac:52:54:00:1a:de:17 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-994669-m02 Clientid:01:52:54:00:1a:de:17}
	I0318 13:41:46.333373 1101555 main.go:141] libmachine: (multinode-994669-m02) DBG | domain multinode-994669-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:1a:de:17 in network mk-multinode-994669
	I0318 13:41:46.333561 1101555 host.go:66] Checking if "multinode-994669-m02" exists ...
	I0318 13:41:46.333891 1101555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:41:46.333934 1101555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:41:46.350964 1101555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I0318 13:41:46.351445 1101555 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:41:46.351999 1101555 main.go:141] libmachine: Using API Version  1
	I0318 13:41:46.352021 1101555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:41:46.352354 1101555 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:41:46.352551 1101555 main.go:141] libmachine: (multinode-994669-m02) Calling .DriverName
	I0318 13:41:46.352731 1101555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:41:46.352756 1101555 main.go:141] libmachine: (multinode-994669-m02) Calling .GetSSHHostname
	I0318 13:41:46.355494 1101555 main.go:141] libmachine: (multinode-994669-m02) DBG | domain multinode-994669-m02 has defined MAC address 52:54:00:1a:de:17 in network mk-multinode-994669
	I0318 13:41:46.355872 1101555 main.go:141] libmachine: (multinode-994669-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:de:17", ip: ""} in network mk-multinode-994669: {Iface:virbr1 ExpiryTime:2024-03-18 14:40:20 +0000 UTC Type:0 Mac:52:54:00:1a:de:17 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-994669-m02 Clientid:01:52:54:00:1a:de:17}
	I0318 13:41:46.355899 1101555 main.go:141] libmachine: (multinode-994669-m02) DBG | domain multinode-994669-m02 has defined IP address 192.168.39.183 and MAC address 52:54:00:1a:de:17 in network mk-multinode-994669
	I0318 13:41:46.356027 1101555 main.go:141] libmachine: (multinode-994669-m02) Calling .GetSSHPort
	I0318 13:41:46.356174 1101555 main.go:141] libmachine: (multinode-994669-m02) Calling .GetSSHKeyPath
	I0318 13:41:46.356335 1101555 main.go:141] libmachine: (multinode-994669-m02) Calling .GetSSHUsername
	I0318 13:41:46.356491 1101555 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18427-1067917/.minikube/machines/multinode-994669-m02/id_rsa Username:docker}
	I0318 13:41:46.439252 1101555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:41:46.454098 1101555 status.go:257] multinode-994669-m02 status: &{Name:multinode-994669-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:41:46.454134 1101555 status.go:255] checking status of multinode-994669-m03 ...
	I0318 13:41:46.454503 1101555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:41:46.454553 1101555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:41:46.471446 1101555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I0318 13:41:46.471960 1101555 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:41:46.472518 1101555 main.go:141] libmachine: Using API Version  1
	I0318 13:41:46.472548 1101555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:41:46.472880 1101555 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:41:46.473086 1101555 main.go:141] libmachine: (multinode-994669-m03) Calling .GetState
	I0318 13:41:46.474699 1101555 status.go:330] multinode-994669-m03 host status = "Stopped" (err=<nil>)
	I0318 13:41:46.474710 1101555 status.go:343] host is not running, skipping remaining checks
	I0318 13:41:46.474717 1101555 status.go:257] multinode-994669-m03 status: &{Name:multinode-994669-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.50s)

TestMultiNode/serial/StartAfterStop (29.88s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-994669 node start m03 -v=7 --alsologtostderr: (29.209739818s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.88s)

TestMultiNode/serial/DeleteNode (2.49s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-994669 node delete m03: (1.937196807s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.49s)

TestMultiNode/serial/RestartMultiNode (170.19s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-994669 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0318 13:52:37.319568 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-994669 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m49.619701416s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-994669 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (170.19s)

TestMultiNode/serial/ValidateNameConflict (43.97s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-994669
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-994669-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-994669-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (79.688494ms)

                                                
                                                
-- stdout --
	* [multinode-994669-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-994669-m02' is duplicated with machine name 'multinode-994669-m02' in profile 'multinode-994669'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-994669-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-994669-m03 --driver=kvm2  --container-runtime=crio: (42.744859255s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-994669
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-994669: exit status 80 (244.230414ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-994669 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-994669-m03 already exists in multinode-994669-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-994669-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.97s)

TestScheduledStopUnix (115.3s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-095060 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-095060 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.512497001s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095060 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-095060 -n scheduled-stop-095060
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095060 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095060 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095060 -n scheduled-stop-095060
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095060
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095060 --schedule 15s
E0318 13:59:17.919064 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095060
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-095060: exit status 7 (81.108797ms)

                                                
                                                
-- stdout --
	scheduled-stop-095060
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095060 -n scheduled-stop-095060
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095060 -n scheduled-stop-095060: exit status 7 (80.652533ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-095060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-095060
--- PASS: TestScheduledStopUnix (115.30s)

TestRunningBinaryUpgrade (247.39s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.788553434 start -p running-upgrade-210993 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.788553434 start -p running-upgrade-210993 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.597382126s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-210993 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-210993 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m56.397679544s)
helpers_test.go:175: Cleaning up "running-upgrade-210993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-210993
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-210993: (1.186853495s)
--- PASS: TestRunningBinaryUpgrade (247.39s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-091972 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-091972 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (102.860069ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-091972] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (96.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-091972 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-091972 --driver=kvm2  --container-runtime=crio: (1m36.607054719s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-091972 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.88s)

TestNetworkPlugins/group/false (6.63s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-059272 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-059272 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (155.332594ms)

                                                
                                                
-- stdout --
	* [false-059272] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 14:00:56.383007 1108325 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:00:56.383200 1108325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:00:56.383214 1108325 out.go:304] Setting ErrFile to fd 2...
	I0318 14:00:56.383221 1108325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:00:56.383563 1108325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18427-1067917/.minikube/bin
	I0318 14:00:56.384462 1108325 out.go:298] Setting JSON to false
	I0318 14:00:56.385924 1108325 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":20603,"bootTime":1710749853,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:00:56.386033 1108325 start.go:139] virtualization: kvm guest
	I0318 14:00:56.388540 1108325 out.go:177] * [false-059272] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:00:56.390351 1108325 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 14:00:56.390410 1108325 notify.go:220] Checking for updates...
	I0318 14:00:56.391686 1108325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:00:56.393245 1108325 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18427-1067917/kubeconfig
	I0318 14:00:56.395084 1108325 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18427-1067917/.minikube
	I0318 14:00:56.396639 1108325 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:00:56.398219 1108325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:00:56.400351 1108325 config.go:182] Loaded profile config "NoKubernetes-091972": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:00:56.400544 1108325 config.go:182] Loaded profile config "offline-crio-096581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:00:56.400659 1108325 config.go:182] Loaded profile config "running-upgrade-210993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0318 14:00:56.400839 1108325 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:00:56.452191 1108325 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 14:00:56.453603 1108325 start.go:297] selected driver: kvm2
	I0318 14:00:56.453635 1108325 start.go:901] validating driver "kvm2" against <nil>
	I0318 14:00:56.453653 1108325 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:00:56.455924 1108325 out.go:177] 
	W0318 14:00:56.457225 1108325 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0318 14:00:56.458587 1108325 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-059272 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-059272

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-059272

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-059272

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-059272

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-059272

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-059272

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-059272

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-059272

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-059272

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-059272

>>> host: /etc/nsswitch.conf:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: /etc/hosts:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: /etc/resolv.conf:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-059272

>>> host: crictl pods:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: crictl containers:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> k8s: describe netcat deployment:
error: context "false-059272" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-059272" does not exist

>>> k8s: netcat logs:
error: context "false-059272" does not exist

>>> k8s: describe coredns deployment:
error: context "false-059272" does not exist

>>> k8s: describe coredns pods:
error: context "false-059272" does not exist

>>> k8s: coredns logs:
error: context "false-059272" does not exist

>>> k8s: describe api server pod(s):
error: context "false-059272" does not exist

>>> k8s: api server logs:
error: context "false-059272" does not exist

>>> host: /etc/cni:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: ip a s:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: ip r s:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: iptables-save:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: iptables table nat:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> k8s: describe kube-proxy daemon set:
error: context "false-059272" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-059272" does not exist

>>> k8s: kube-proxy logs:
error: context "false-059272" does not exist

>>> host: kubelet daemon status:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: kubelet daemon config:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> k8s: kubelet logs:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-059272

>>> host: docker daemon status:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: docker daemon config:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: /etc/docker/daemon.json:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: docker system info:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: cri-docker daemon status:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: cri-docker daemon config:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: cri-dockerd version:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: containerd daemon status:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: containerd daemon config:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: /etc/containerd/config.toml:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: containerd config dump:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: crio daemon status:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: crio daemon config:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: /etc/crio:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

>>> host: crio config:
* Profile "false-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-059272"

----------------------- debugLogs end: false-059272 [took: 6.305562648s] --------------------------------
helpers_test.go:175: Cleaning up "false-059272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-059272
--- PASS: TestNetworkPlugins/group/false (6.63s)

TestNoKubernetes/serial/StartWithStopK8s (39.9s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-091972 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-091972 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.562853208s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-091972 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-091972 status -o json: exit status 2 (267.643978ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-091972","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-091972
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-091972: (1.067930191s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.90s)

TestNoKubernetes/serial/Start (54.58s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-091972 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-091972 --no-kubernetes --driver=kvm2  --container-runtime=crio: (54.580277812s)
--- PASS: TestNoKubernetes/serial/Start (54.58s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-091972 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-091972 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.277376ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNoKubernetes/serial/ProfileList (13.44s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (12.840469868s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (13.44s)

TestNoKubernetes/serial/Stop (1.38s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-091972
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-091972: (1.379807368s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

TestNoKubernetes/serial/StartNoArgs (30.06s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-091972 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-091972 --driver=kvm2  --container-runtime=crio: (30.062006993s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (30.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-091972 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-091972 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.280812ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestPause/serial/Start (99.69s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-263134 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0318 14:04:00.371350 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 14:04:00.964913 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-263134 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m39.688943806s)
--- PASS: TestPause/serial/Start (99.69s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.01s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (114.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1718332683 start -p stopped-upgrade-563366 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0318 14:04:17.918548 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/functional-044661/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1718332683 start -p stopped-upgrade-563366 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m8.384111606s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1718332683 -p stopped-upgrade-563366 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1718332683 -p stopped-upgrade-563366 stop: (2.141974149s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-563366 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-563366 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.304826045s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (114.83s)
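
The upgrade path exercised above (version_upgrade_test.go:183-198) is: start the profile with an older released binary, stop it with that same binary, then start the same profile again with the binary under test. A condensed sketch of that sequence, with the paths and flags copied from the log; it is an illustration, not the test's helper code.

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes one minikube invocation and aborts on failure.
    func run(bin string, args ...string) {
        if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
            log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
        }
    }

    func main() {
        oldBin := "/tmp/minikube-v1.26.0.1718332683" // previously released binary
        newBin := "out/minikube-linux-amd64"         // binary under test

        run(oldBin, "start", "-p", "stopped-upgrade-563366", "--memory=2200",
            "--vm-driver=kvm2", "--container-runtime=crio")
        run(oldBin, "-p", "stopped-upgrade-563366", "stop")
        run(newBin, "start", "-p", "stopped-upgrade-563366", "--memory=2200",
            "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
    }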

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (55.52s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-263134 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-263134 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.495078459s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (55.52s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-563366
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (127.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m7.937929175s)
--- PASS: TestNetworkPlugins/group/auto/Start (127.94s)

                                                
                                    
x
+
TestPause/serial/Pause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-263134 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-263134 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-263134 --output=json --layout=cluster: exit status 2 (327.023858ms)

                                                
                                                
-- stdout --
	{"Name":"pause-263134","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-263134","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
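
The --layout=cluster output above reports the paused state per component (StatusCode 418 "Paused" for the apiserver, 405 "Stopped" for the kubelet), and the exit status 2 from "minikube status" is expected here since the cluster is paused. A minimal sketch of decoding that JSON; the struct models only the keys visible in the output and nothing else.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type component struct {
        Name       string
        StatusCode int
        StatusName string
    }

    type clusterStatus struct {
        Name       string
        StatusCode int
        StatusName string
        Nodes      []struct {
            Name       string
            StatusCode int
            StatusName string
            Components map[string]component
        }
    }

    func main() {
        // The command exits 2 for a paused cluster, so the error is ignored and
        // only the JSON left on stdout is inspected.
        out, _ := exec.Command("out/minikube-linux-amd64", "status",
            "-p", "pause-263134", "--output=json", "--layout=cluster").Output()
        var cs clusterStatus
        if err := json.Unmarshal(out, &cs); err != nil {
            fmt.Println("decode:", err)
            return
        }
        fmt.Printf("cluster %s: %s\n", cs.Name, cs.StatusName)
        for _, n := range cs.Nodes {
            fmt.Printf("  node %s: apiserver=%s kubelet=%s\n", n.Name,
                n.Components["apiserver"].StatusName, n.Components["kubelet"].StatusName)
        }
    }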

                                                
                                    
x
+
TestPause/serial/Unpause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-263134 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.90s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.17s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-263134 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-263134 --alsologtostderr -v=5: (1.17367564s)
--- PASS: TestPause/serial/PauseAgain (1.17s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.18s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-263134 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-263134 --alsologtostderr -v=5: (1.176266455s)
--- PASS: TestPause/serial/DeletePaused (1.18s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.61s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (88.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m28.948940669s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (124.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0318 14:07:37.319578 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m4.666435416s)
--- PASS: TestNetworkPlugins/group/calico/Start (124.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qqwp7" [0a3ad447-fbdc-492c-8bbb-7532ec4ceefa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.008228483s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
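
The "waiting 10m0s for pods matching ..." lines come from the test helpers polling the cluster for a label selector. An equivalent one-shot check from the command line, assuming the same selector, namespace and timeout, is sketched below; it mirrors what the helper verifies but is not its implementation.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Wait for the kindnet DaemonSet pod to become Ready, using the same
        // app=kindnet / kube-system selector as the test above.
        cmd := exec.Command("kubectl", "--context", "kindnet-059272",
            "wait", "--for=condition=ready", "pod",
            "-l", "app=kindnet", "-n", "kube-system", "--timeout=10m")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("kindnet pod not ready: %v\n%s", err, out)
        }
        log.Println("kindnet controller pod is ready")
    }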

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-059272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-059272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xshlp" [8a44632e-8238-466b-b233-b5aee6d9e5d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xshlp" [8a44632e-8238-466b-b233-b5aee6d9e5d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005526677s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-059272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (14.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-059272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lxrgq" [b7c4f696-8569-4d9b-943e-6db3a67a5fce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lxrgq" [b7c4f696-8569-4d9b-943e-6db3a67a5fce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.00513797s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-059272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-059272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
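
The DNS, Localhost and HairPin checks above (net_test.go:175, :194 and :264) all exec into the netcat deployment: an nslookup of kubernetes.default, an nc probe of localhost:8080, and an nc probe of the service name to confirm hairpin traffic works. A compressed sketch of the same three probes, with the kubectl invocations copied from the log for the auto-059272 context:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probe runs a command inside the netcat deployment via kubectl exec.
    func probe(ctx string, args ...string) error {
        full := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, args...)
        return exec.Command("kubectl", full...).Run()
    }

    func main() {
        ctx := "auto-059272"
        checks := [][]string{
            {"nslookup", "kubernetes.default"},                  // DNS
            {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}, // localhost
            {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},    // hairpin via service name
        }
        for _, c := range checks {
            fmt.Printf("%v -> err=%v\n", c, probe(ctx, c...))
        }
    }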

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (83.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m23.992784066s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (99.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m39.09659456s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (136.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m16.783058571s)
--- PASS: TestNetworkPlugins/group/flannel/Start (136.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7z6fs" [e7dd50b9-0706-4748-8d10-e4b55d1de1d0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006290148s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-059272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-059272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zzxmp" [dafb67fd-fc79-48a5-961e-10cb7f4b0ef6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zzxmp" [dafb67fd-fc79-48a5-961e-10cb7f4b0ef6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004632756s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-059272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (109.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-059272 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m49.526429584s)
--- PASS: TestNetworkPlugins/group/bridge/Start (109.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-059272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-059272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z278n" [df76ca07-bafb-4a36-871f-e5017be39aca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z278n" [df76ca07-bafb-4a36-871f-e5017be39aca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005698998s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-059272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-059272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-059272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z77g6" [9745e239-00a1-46c6-b760-a7fb5c5f961e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z77g6" [9745e239-00a1-46c6-b760-a7fb5c5f961e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006587783s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-059272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (132.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-188109 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-188109 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m12.27651508s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (132.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dljdr" [d7a87402-bd48-40b0-a66a-365480b6c198] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006114664s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-059272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (15.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-059272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-59scv" [28bf9600-d432-4470-b8b7-dbf9d41826ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-59scv" [28bf9600-d432-4470-b8b7-dbf9d41826ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.004541408s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-059272 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-059272 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c969q" [7c120a30-c979-4e6c-b3c4-9150d8762d72] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c969q" [7c120a30-c979-4e6c-b3c4-9150d8762d72] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004330813s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-059272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-059272 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)
E0318 14:41:15.725392 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:41:25.564699 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-059272 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (102.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-767719 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-767719 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m42.456226829s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (102.46s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (123.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-075922 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0318 14:12:37.319357 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
E0318 14:13:10.147256 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:10.152588 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:10.162882 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:10.183260 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:10.223604 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:10.303937 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:10.464451 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:10.785043 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:11.426050 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:12.706681 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-075922 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m3.519801144s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (123.52s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-188109 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9a84917c-11c7-498f-8ee7-199f260a8d48] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0318 14:13:15.267879 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
helpers_test.go:344: "busybox" [9a84917c-11c7-498f-8ee7-199f260a8d48] Running
E0318 14:13:18.301617 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:13:18.306976 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:13:18.317282 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:13:18.337608 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:13:18.377892 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:13:18.458743 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:13:18.619289 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:13:18.940058 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:13:19.581132 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
E0318 14:13:20.388461 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/kindnet-059272/client.crt: no such file or directory
E0318 14:13:20.862051 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004847329s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-188109 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)
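
The DeployApp step above (start_stop_delete_test.go:196) creates a busybox pod from testdata/busybox.yaml, waits for the integration-test=busybox pod to become healthy, and then reads the container's open-file limit with "ulimit -n". A reduced sketch of the same three steps driven through kubectl; the manifest path, label and context name are taken from the log.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // kubectl runs one kubectl command against the given context and fails hard on error.
    func kubectl(ctx string, args ...string) []byte {
        out, err := exec.Command("kubectl",
            append([]string{"--context", ctx}, args...)...).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
        return out
    }

    func main() {
        ctx := "no-preload-188109"
        kubectl(ctx, "create", "-f", "testdata/busybox.yaml")
        kubectl(ctx, "wait", "--for=condition=ready", "pod",
            "-l", "integration-test=busybox", "--timeout=8m")
        out := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
        fmt.Printf("open file limit in busybox: %s", out)
    }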

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-188109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0318 14:13:23.422897 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-188109 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-767719 create -f testdata/busybox.yaml
E0318 14:13:38.783948 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e1d8fa12-7aec-4055-b62a-f7d97b5c417b] Pending
helpers_test.go:344: "busybox" [e1d8fa12-7aec-4055-b62a-f7d97b5c417b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e1d8fa12-7aec-4055-b62a-f7d97b5c417b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004443216s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-767719 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-767719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-767719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037687613s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-767719 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-075922 create -f testdata/busybox.yaml
E0318 14:14:00.594442 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:14:00.674758 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e9ab9169-83c3-4ad1-b02c-d01fc3fda2a4] Pending
E0318 14:14:00.835787 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:14:01.156515 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e9ab9169-83c3-4ad1-b02c-d01fc3fda2a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0318 14:14:01.796742 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:14:03.077739 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e9ab9169-83c3-4ad1-b02c-d01fc3fda2a4] Running
E0318 14:14:05.638218 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004864538s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-075922 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-075922 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-075922 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.00939511s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-075922 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (701.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-188109 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0318 14:16:02.145738 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/auto-059272/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-188109 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (11m41.500655496s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188109 -n no-preload-188109
E0318 14:27:37.318647 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (701.79s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (599.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-767719 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0318 14:16:20.844913 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:16:25.565590 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:25.570871 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:25.581121 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:25.601392 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:25.641680 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:25.722055 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:25.882549 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:25.966090 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:16:26.203482 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:26.844549 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:28.125467 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-767719 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m59.637833478s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767719 -n embed-certs-767719
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (599.95s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (600.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-075922 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0318 14:16:44.362067 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/calico-059272/client.crt: no such file or directory
E0318 14:16:46.047538 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
E0318 14:16:54.160427 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/enable-default-cni-059272/client.crt: no such file or directory
E0318 14:16:56.687546 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/flannel-059272/client.crt: no such file or directory
E0318 14:17:06.528011 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/bridge-059272/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-075922 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m0.542675763s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075922 -n default-k8s-diff-port-075922
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (600.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-782728 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-782728 --alsologtostderr -v=3: (2.371043998s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-782728 -n old-k8s-version-782728: exit status 7 (85.118782ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-782728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (56.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-997491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-997491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (56.987171072s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (56.99s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-997491 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-997491 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.121660653s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-997491 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-997491 --alsologtostderr -v=3: (11.356918569s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-997491 -n newest-cni-997491
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-997491 -n newest-cni-997491: exit status 7 (94.764891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-997491 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (39.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-997491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-997491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (38.990849013s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-997491 -n newest-cni-997491
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-997491 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-997491 --alsologtostderr -v=1
E0318 14:42:37.319083 1075208 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18427-1067917/.minikube/profiles/addons-106685/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-997491 -n newest-cni-997491
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-997491 -n newest-cni-997491: exit status 2 (250.169203ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-997491 -n newest-cni-997491
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-997491 -n newest-cni-997491: exit status 2 (251.371542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-997491 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-997491 -n newest-cni-997491
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-997491 -n newest-cni-997491
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.60s)

                                                
                                    

Test skip (39/325)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 3.92
271 TestNetworkPlugins/group/cilium 4.26
277 TestStartStop/group/disable-driver-mounts 0.16
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-059272 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-059272" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-059272

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-059272"

                                                
                                                
----------------------- debugLogs end: kubenet-059272 [took: 3.710486855s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-059272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-059272
--- SKIP: TestNetworkPlugins/group/kubenet (3.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-059272 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-059272

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-059272

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-059272

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-059272

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-059272

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-059272

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-059272

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-059272

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-059272

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-059272

>>> host: /etc/nsswitch.conf:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: /etc/hosts:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: /etc/resolv.conf:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-059272

>>> host: crictl pods:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: crictl containers:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> k8s: describe netcat deployment:
error: context "cilium-059272" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-059272" does not exist

>>> k8s: netcat logs:
error: context "cilium-059272" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-059272" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-059272" does not exist

>>> k8s: coredns logs:
error: context "cilium-059272" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-059272" does not exist

>>> k8s: api server logs:
error: context "cilium-059272" does not exist

>>> host: /etc/cni:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: ip a s:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: ip r s:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: iptables-save:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: iptables table nat:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-059272

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-059272

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-059272" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-059272" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-059272

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-059272

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-059272" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-059272" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-059272" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-059272" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-059272" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: kubelet daemon config:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> k8s: kubelet logs:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-059272

>>> host: docker daemon status:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: docker daemon config:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: docker system info:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: cri-docker daemon status:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: cri-docker daemon config:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: cri-dockerd version:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: containerd daemon status:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: containerd daemon config:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: containerd config dump:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: crio daemon status:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: crio daemon config:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: /etc/crio:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

>>> host: crio config:
* Profile "cilium-059272" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-059272"

----------------------- debugLogs end: cilium-059272 [took: 4.09409944s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-059272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-059272
--- SKIP: TestNetworkPlugins/group/cilium (4.26s)

x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-784874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-784874
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
